langchain
API Reference
langchain.adapters
Classes

- Chat completion.

Functions

- Async version of the enumerate function.
- Convert a dictionary to a LangChain message.
- Convert a LangChain message to a dictionary.
- Convert messages to a list of lists of dictionaries for fine-tuning.
- Convert dictionaries representing OpenAI messages to LangChain format.
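For example, the dictionary/message converters round-trip between the OpenAI message format and LangChain message objects (a minimal sketch; the message content is illustrative):

from langchain.adapters.openai import convert_dict_to_message, convert_message_to_dict

# An OpenAI-style dict becomes a typed LangChain message, and back again.
msg = convert_dict_to_message({"role": "user", "content": "Hello!"})
print(type(msg).__name__)            # HumanMessage
print(convert_message_to_dict(msg))  # {'role': 'user', 'content': 'Hello!'}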
langchain.agents
Agent is a class that uses an LLM to choose a sequence of actions to take.
In Chains, a sequence of actions is hardcoded. In Agents, a language model is used as a reasoning engine to determine which actions to take and in which order.
Agents select and use Tools and Toolkits for actions.
Class hierarchy:
BaseSingleActionAgent --> LLMSingleActionAgent
OpenAIFunctionsAgent
XMLAgent
Agent --> <name>Agent # Examples: ZeroShotAgent, ChatAgent
BaseMultiActionAgent --> OpenAIMultiFunctionsAgent
Main helpers:
AgentType, AgentExecutor, AgentOutputParser, AgentExecutorIterator,
AgentAction, AgentFinish
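For example, a minimal sketch of the standard executor pattern (assumes an OpenAI API key is configured; the tool list and question are illustrative):

from langchain.agents import AgentType, initialize_agent, load_tools
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)
tools = load_tools(["llm-math"], llm=llm)  # a calculator tool driven by the LLM
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION)
agent.run("What is 2 raised to the 10th power?")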
Classes

- Agent that calls the language model and decides on the action.
- Agent that uses tools.
- Base class for parsing agent output into agent action/finish.
- Base Multi Action Agent class.
- Base Single Action Agent class.
- Tool that just returns the query.
- Base class for single action agents.
- Base class for parsing agent output into agent actions/finish.
- Agent powered by runnables.
- Agent powered by runnables.
- Iterator for AgentExecutor.
- Base class for AgentExecutorIterator.
- Toolkit for interacting with AINetwork Blockchain.
- Toolkit for interacting with Amadeus, which offers APIs for travel search.
- Toolkit for Azure Cognitive Services.
- Base Toolkit representing a collection of related tools.
- ClickUp Toolkit.
- Toolkit for interacting with local files.
- GitHub Toolkit.
- GitLab Toolkit.
- Toolkit for interacting with Gmail.
- Jira Toolkit.
- Toolkit for interacting with a JSON spec.
- Toolkit for interacting with the Browser Agent.
- Natural Language API Tool.
- Natural Language API Toolkit.
- Toolkit for interacting with Office 365.
- A tool that sends a DELETE request and parses the response.
- Requests GET tool with LLM-instructed extraction of truncated responses.
- Requests PATCH tool with LLM-instructed extraction of truncated responses.
- Requests POST tool with LLM-instructed extraction of truncated responses.
- Requests PUT tool with LLM-instructed extraction of truncated responses.
- A reduced OpenAPI spec.
- Toolkit for interacting with an OpenAPI API.
- Toolkit for making REST requests.
- Toolkit for Playwright browser tools.
- Toolkit for interacting with a Power BI dataset.
- Toolkit for interacting with Spark SQL.
- Toolkit for interacting with SQL databases.
- Information about a VectorStore.
- Toolkit for routing between Vector Stores.
- Toolkit for interacting with a Vector Store.
- Zapier Toolkit.
- An enum for agent types.
- Chat Agent.
- Output parser for the chat agent.
- An agent that holds a conversation in addition to using tools.
- Output parser for the conversational agent.
- An agent designed to hold a conversation in addition to using tools.
- Output parser for the conversational agent.
- Configuration for a chain to use in the MRKL system.
- [Deprecated] Chain that implements the MRKL system.
- Agent for the MRKL chain.
- MRKL output parser for the chat agent.
- AgentAction with the info needed to submit custom tool output to an existing run.
- AgentFinish with run and thread metadata.
- Run an OpenAI Assistant.
- Memory used to save agent output AND intermediate steps.
- An agent driven by OpenAI's function-powered API.
- An agent driven by OpenAI's function-powered API.
- Parses tool invocations and final answers in JSON format.
- Parses a message into agent action/finish.
- Overrides init to support instantiation by position for backward compatibility.
- Parses a message into agent actions/finish.
- Parses ReAct-style LLM calls that have a single tool input in JSON format.
- Parses ReAct-style LLM calls that have a single tool input.
- Parses self-ask style LLM calls.
- Parses tool invocations and final answers in XML format.
- Class to assist with exploration of a document store.
- [Deprecated] Chain that implements the ReAct paper.
- Agent for the ReAct chain.
- Agent for the ReAct TextWorld chain.
- Output parser for the ReAct agent.
- Chat prompt template for the agent scratchpad.
- Agent for the self-ask-with-search paper.
- [Deprecated] Chain that does self-ask with search.
- Structured Chat Agent.
- Output parser for the structured chat agent.
- Output parser with retries for the structured chat agent.
- Tool that is run when an invalid tool name is encountered by the agent.
- Agent that uses XML tags.
Functions

- Decorator to force setters to rebuild the callback manager.
- A convenience method for creating a conversational retrieval agent.
- Construct a JSON agent from an LLM and tools.
- Construct an OpenAPI agent from an LLM and tools.
- Instantiate an OpenAI API planner and controller for a given spec.
- Simplify, distill, and minify an OpenAPI spec.
- Construct a Power BI agent from an LLM and tools.
- Construct a Power BI agent from a chat LLM and tools.
- Construct a Spark SQL agent from an LLM and tools.
- Construct an SQL agent from an LLM and tools.
- Construct a VectorStore agent from an LLM and tools.
- Construct a VectorStore router agent from an LLM and tools.
- Construct the scratchpad that lets the agent continue its thought process.
- Construct the scratchpad that lets the agent continue its thought process.
- Convert (AgentAction, tool output) tuples into FunctionMessages.
- Convert (AgentAction, tool output) tuples into FunctionMessages.
- Convert (AgentAction, tool output) tuples into FunctionMessages.
- Format the intermediate steps as XML.
- Load an agent executor given tools and an LLM.
- Get a list of all possible tool names.
- Load a tool from the HuggingFace Hub.
- Load tools based on their name.
- Unified method for loading an agent from LangChainHub or the local filesystem.
- Load an agent from a config dict.
- Parse an AI message potentially containing tool_calls.
- Validate tools for single input.
langchain.agents.format_scratchpad
Logic for formatting intermediate steps into an agent scratchpad.
Intermediate steps refers to the list of (AgentAction, observation) tuples that result from previous iterations of the agent. Depending on the prompting strategy you are using, you may want to format these differently before passing them into the LLM.
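For example, format_log_to_str renders the steps as the plain-text scratchpad used by ReAct-style agents (a minimal sketch; the action and observation values are illustrative):

from langchain.agents.format_scratchpad import format_log_to_str
from langchain.schema import AgentAction

steps = [
    (AgentAction(tool="search", tool_input="weather in SF",
                 log="I should look this up.\nAction: search\nAction Input: weather in SF"),
     "It is sunny."),
]
# Appends each action's log plus "Observation: ...\nThought: " for the next LLM call.
print(format_log_to_str(steps))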
Functions

- Construct the scratchpad that lets the agent continue its thought process.
- Construct the scratchpad that lets the agent continue its thought process.
- Convert (AgentAction, tool output) tuples into FunctionMessages.
- Convert (AgentAction, tool output) tuples into FunctionMessages.
- Convert (AgentAction, tool output) tuples into FunctionMessages.
- Format the intermediate steps as XML.
langchain.agents.output_parsers
Parsing utils to go from string to AgentAction or AgentFinish.
AgentAction means that an action should be taken. This contains the name of the tool to use, the input to pass to that tool, and a log variable (which contains a log of the agent’s thinking).
AgentFinish means that a response should be given. This contains a return_values dictionary. This usually contains a single output key, but can be extended to contain more. This also contains a log variable (which contains a log of the agent’s thinking).
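For example, the ReAct single-input parser maps "Action:"/"Action Input:" text to an AgentAction (a minimal sketch; the tool name and input are illustrative):

from langchain.agents.output_parsers import ReActSingleInputOutputParser

parser = ReActSingleInputOutputParser()
result = parser.parse(
    "I should look this up.\nAction: search\nAction Input: weather in SF"
)
print(result.tool, "|", result.tool_input)  # search | weather in SF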
Classes

- Parses tool invocations and final answers in JSON format.
- Parses a message into agent action/finish.
- Overrides init to support instantiation by position for backward compatibility.
- Parses a message into agent actions/finish.
- Parses ReAct-style LLM calls that have a single tool input in JSON format.
- Parses ReAct-style LLM calls that have a single tool input.
- Parses self-ask style LLM calls.
- Parses tool invocations and final answers in XML format.
Functions

- Parse an AI message potentially containing tool_calls.
langchain.cache
Warning: Beta Feature!
Cache provides an optional caching layer for LLMs.
Cache is useful for two reasons:
It can save you money by reducing the number of API calls you make to the LLM provider if you’re often requesting the same completion multiple times.
It can speed up your application by reducing the number of API calls you make to the LLM provider.
Cache directly competes with Memory. See documentation for Pros and Cons.
Class hierarchy:
BaseCache --> <name>Cache # Examples: InMemoryCache, RedisCache, GPTCache
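For example, installing the in-memory cache globally (a minimal sketch):

import langchain
from langchain.cache import InMemoryCache

# From here on, a repeated LLM call with an identical prompt is served from the cache.
langchain.llm_cache = InMemoryCache()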
Classes

- Cache that uses Cassandra / Astra DB as a backend.
- Cache that uses Cassandra as a vector-store backend for semantic (i.e. similarity-based) lookup.
- SQLite table for full LLM Cache (all generations).
- SQLite table for full LLM Cache (all generations).
- Cache that uses GPTCache as a backend.
- Cache that stores things in memory.
- Cache that uses Momento as a backend.
- Cache that uses Redis as a backend.
- Cache that uses Redis as a vector-store backend.
- Cache that uses SQLAlchemy as a backend.
- Cache that uses SQLAlchemy as a backend.
- Cache that uses SQLite as a backend.
- Cache that uses Upstash Redis as a backend.
Functions
langchain.callbacks
Callback handlers allow listening to events in LangChain.
Class hierarchy:
BaseCallbackHandler --> <name>CallbackHandler # Example: AimCallbackHandler
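For example, the OpenAI callback handler tracks token usage and cost within a context manager (a minimal sketch; assumes an OpenAI API key is configured):

from langchain.callbacks import get_openai_callback
from langchain.llms import OpenAI

llm = OpenAI()
with get_openai_callback() as cb:
    llm("Tell me a joke")
print(cb.total_tokens, cb.total_cost)  # usage recorded by the handler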
Classes

- Callback Handler that logs to Aim.
- This class handles the metadata and associated function states for callbacks.
- Callback Handler that logs into Argilla.
- Callback Handler that logs to Arize.
- Callback Handler that logs to the Arthur platform.
- Callback Handler that logs to ClearML.
- Callback Handler that logs to Comet.
- Callback Handler that logs into deepeval.
- Callback Handler that records transcripts to the Context service.
- Callback Handler that writes to a file.
- Callback handler that is used within a Flyte task.
- Callback for manually validating values.
- Exception to raise when a person manually reviews and rejects a value.
- Callback Handler that logs to Infino.
- Label Studio callback handler.
- Label Studio mode enumerator.
- Callback Handler for LLMonitor.
- Context manager for LLMonitor user context.
- Callback Handler that logs metrics and artifacts to an MLflow server.
- Callback Handler that logs metrics and artifacts to an MLflow server.
- Callback Handler that tracks OpenAI info.
- Callback handler for PromptLayer.
- Callback Handler that logs prompt artifacts and metrics to SageMaker Experiments.
- Callback handler that returns an async iterator.
- Callback handler that returns an async iterator.
- Callback handler for streaming in agents.
- The child record as a NamedTuple.
- The enumerator of the child type.
- A Streamlit expander that can be renamed and dynamically expanded/collapsed.
- A thought in the LLM's thought stream.
- Generates markdown labels for LLMThought containers.
- Enumerator of the LLMThought state.
- A callback handler that writes to a Streamlit app.
- The tool record as a NamedTuple.
- Handles the conversion of LangChain Runs into a WBTraceTree.
- Arguments for the WandbTracer.
- Callback Handler that logs to Weights and Biases.
- Callback handler for Trubrics.
- This class handles the metadata and associated function states for callbacks.
- Callback Handler that logs to Weights and Biases.
- Callback Handler for logging to WhyLabs.
Functions

- Import the aim python package and raise an error if it is not installed.
- Import the clearml python package and raise an error if it is not installed.
- Import comet_ml and raise an error if it is not installed.
- Import the getcontext package.
- Analyze text using textstat and spacy.
- Import flytekit and flytekitplugins-deck-standard.
- Calculate the number of tokens for OpenAI with the tiktoken package.
- Import the infino client.
- Import tiktoken for counting tokens for OpenAI models.
- Get default Label Studio configs for the given mode.
- Build an LLMonitor UserContextManager.
- Get the OpenAI callback handler in a context manager.
- Get the WandbTracer in a context manager.
- Analyze text using textstat and spacy.
- Construct an HTML element from a prompt and a generation.
- Import the mlflow python package and raise an error if it is not installed.
- Get the cost in USD for a given model and number of tokens.
- Standardize the model name to a format that can be used in the OpenAI API.
- Save a dict to a local file path.
- Flatten a nested dictionary into a flat dictionary.
- Hash a string using sha1.
- Import the pandas python package and raise an error if it is not installed.
- Import the spacy python package and raise an error if it is not installed.
- Import the textstat python package and raise an error if it is not installed.
- Load a json file to a string.
- Analyze text using textstat and spacy.
- Construct an HTML element from a prompt and a generation.
- Import the wandb python package and raise an error if it is not installed.
- Load a json file to a dictionary.
- Import the langkit python package and raise an error if it is not installed.
langchain.chains
Chains are easily reusable components linked together.
Chains encode a sequence of calls to components like models, document retrievers, other Chains, etc., and provide a simple interface to this sequence.
The Chain interface makes it easy to create apps that are:
Stateful: add Memory to any Chain to give it state,
Observable: pass Callbacks to a Chain to execute additional functionality, like logging, outside the main sequence of component calls,
Composable: combine Chains with other components, including other Chains.
Class hierarchy:
Chain --> <name>Chain # Examples: LLMChain, MapReduceChain, RouterChain
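For example, a minimal LLMChain (assumes an OpenAI API key is configured; the prompt is illustrative):

from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

prompt = PromptTemplate.from_template("Name a company that makes {product}.")
chain = LLMChain(llm=OpenAI(temperature=0), prompt=prompt)
print(chain.run(product="colorful socks"))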
Classes

- Chain that makes API calls and summarizes the responses to answer a question.
- Chain that interacts with an OpenAPI endpoint using natural language.
- Get the request parser.
- Parse the request and error tags.
- Get the response parser.
- Parse the response and error tags.
- Abstract base class for creating structured sequences of calls to components.
- Chain that splits documents, then analyzes them in pieces.
- Base interface for chains combining documents.
- Combining documents by mapping a chain over them, then combining results.
- Combining documents by mapping a chain over them, then reranking results.
- Interface for the combine_docs method.
- Interface for the combine_docs method.
- Combine documents by recursively reducing them.
- Combine documents by doing a first pass and then refining on more documents.
- Chain that combines documents by stuffing into context.
- Chain for applying constitutional principles.
- Class for a constitutional principle.
- Chain to have a conversation and load context from memory.
- Chain for chatting with an index.
- Chain for chatting with a vector database.
- Chain for having a conversation based on retrieved documents.
- Create a new model by parsing and validating input data from keyword arguments.
- Chain for interacting with an Elasticsearch database.
- Chain that combines a retriever, a question generator, and a response generator.
- Chain that generates questions from uncertain spans.
- Output parser that checks if the output is finished.
- Chain for question-answering against a graph by generating AQL statements.
- Chain for question-answering against a graph.
- Chain for question-answering against a graph by generating Cypher statements.
- Used to correct relationship direction in generated Cypher statements.
- Create a new instance of Schema(left_node, relation, right_node).
- Chain for question-answering against a graph by generating Cypher statements.
- Chain for question-answering against a graph by generating Gremlin statements.
- Question-answering against a graph by generating Cypher statements for Kùzu.
- Chain for question-answering against a graph by generating nGQL statements.
- Chain for question-answering against a Neptune graph by generating openCypher statements.
- Question-answering against an RDF or OWL graph by generating SPARQL statements.
- Generate a hypothetical document for the query, and then embed that.
- Chain to run queries against LLMs.
- Chain for question-answering with self-verification.
- Chain that interprets a prompt and executes Python code to do math.
- Chain that requests a URL and then uses an LLM to parse results.
- Chain for question-answering with self-verification.
- Map-reduce chain.
- Pass input through a moderation endpoint.
- Implement an LLM-driven browser.
- A crawler for web pages.
- A typed dictionary containing information about elements in the viewport.
- Class representing a single statement.
- A question and its answer as a list of facts, each of which should have a source.
- Chain for making a simple request to an API endpoint.
- An answer to the question, with sources.
- Base class for prompt selectors.
- Prompt collection that goes through conditionals.
- Base class for question-answer generation chains.
- Question answering chain with sources over documents.
- Question answering with sources over documents.
- Interface for loading the combine documents chain.
- Question-answering with sources over an index.
- Question-answering with sources over a vector database.
- Output parser that parses a structured query.
- Enumerator of the comparison operators.
- A comparison to a value.
- Base class for all expressions.
- A filtering expression.
- A logical operation over other directives.
- Enumerator of the operations.
- A structured query.
- Defines the interface for IR translation using the visitor pattern.
- A date in ISO 8601 format (YYYY-MM-DD).
- Information about a data source attribute.
- Base class for question-answering chains.
- Chain for question-answering against an index.
- Chain for question-answering against a vector database.
- Use a single chain to route an input to one of multiple candidate chains.
- Create a new instance of Route(destination, next_inputs).
- Chain that outputs the name of a destination chain and the inputs to it.
- Chain that uses embeddings to route between options.
- A router chain that uses an LLM chain to perform routing.
- Parser for the output of the router chain in the multi-prompt chain.
- A multi-route chain that uses an LLM router chain to choose amongst prompts.
- A multi-route chain that uses an LLM router chain to choose amongst retrieval QA chains.
- Chain where the outputs of one chain feed directly into the next.
- Simple chain where the outputs of one step feed directly into the next.
- Input for an SQL Chain.
- Input for an SQL Chain.
- Chain that transforms the chain output.
Functions

- Execute a collapse function on a set of documents and merge their metadatas.
- Execute a collapse function on a set of documents and merge their metadatas.
- Split Documents into subsets that each meet a cumulative length constraint.
- Convert a Python function to an Ernie function-calling API compatible dict.
- Convert a raw function/class to an Ernie function.
- [Legacy] Create an LLM chain that uses Ernie functions.
- Create a runnable sequence that uses Ernie functions.
- [Legacy] Create an LLMChain that uses an Ernie function to get a structured output.
- Create a runnable that uses an Ernie function to get a structured output.
- Get the appropriate function output parser given the user functions.
- Return another example given a list of examples for a prompt.
- Filter the schema based on included or excluded types.
- Extract Cypher code from a text.
- Extract Cypher code from a text.
- Extract Cypher code from text using a regex.
- Trim the query to only include Cypher keywords.
- Decide whether to use the simple prompt.
- Unified method for loading a chain from LangChainHub or the local filesystem.
- Load a chain from a config dict.
- Convert a Python function to an OpenAI function-calling API compatible dict.
- Convert a raw function/class to an OpenAI function.
- [Legacy] Create an LLM chain that uses OpenAI functions.
- Create a runnable sequence that uses OpenAI functions.
- [Legacy] Create an LLMChain that uses an OpenAI function to get a structured output.
- Create a runnable that uses an OpenAI function to get a structured output.
- Get the appropriate function output parser given the user functions.
- Create a citation fuzzy match chain.
- Creates a chain that extracts information from a passage.
- Creates a chain that extracts information from a passage using a pydantic schema.
- Create a chain for querying an API from an OpenAPI spec.
- Convert a valid OpenAPI spec to the JSON Schema format expected for OpenAI functions.
- Create a question answering chain that returns an answer with sources.
- Create a question answering chain that returns an answer with sources.
- Creates a chain that extracts information from a passage.
- Creates a chain that extracts information from a passage.
- Returns the kwargs for the LLMChain constructor.
- Check if the language model is a chat model.
- Check if the language model is an LLM.
- Load a question answering with sources chain.
- Construct examples from input-output pairs.
- Fix an invalid filter directive.
- Create a query construction prompt.
- Load a query constructor chain.
- Load a query constructor runnable chain.
- Returns a parser for the query language.
- Dummy decorator for when lark is not installed.
- Create a chain that generates SQL queries.
langchain.chat_loaders
Chat Loaders load chat messages from common communications platforms.
Load chat messages from various communications platforms such as Facebook Messenger, Telegram, and WhatsApp. The loaded chat messages can be used for fine-tuning models.
Class hierarchy:
BaseChatLoader --> <name>ChatLoader # Examples: WhatsAppChatLoader, IMessageChatLoader
Main helpers:
ChatSession
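For example, the typical loading pipeline (a minimal sketch; the export path and sender name are illustrative placeholders):

from langchain.chat_loaders.whatsapp import WhatsAppChatLoader
from langchain.chat_loaders.utils import map_ai_messages, merge_chat_runs

loader = WhatsAppChatLoader(path="./whatsapp_export.txt")
sessions = merge_chat_runs(loader.lazy_load())        # collapse consecutive messages by one sender
sessions = map_ai_messages(sessions, sender="Alice")  # treat Alice's messages as AI turns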
Classes

- Base class for chat loaders.
- Load Facebook Messenger chat data from a folder.
- Load Facebook Messenger chat data from a single file.
- Load data from Gmail.
- Load chat sessions from the iMessage chat.db SQLite file.
- Load chat sessions from a LangSmith dataset with the "chat" data type.
- Load chat sessions from a list of LangSmith "llm" runs.
- Load Slack conversations from a dump zip file.
- Load Telegram conversations to LangChain chat messages.
- Load WhatsApp conversations from a dump zip file or directory.
Functions

- Convert messages from the specified 'sender' to AI messages.
- Convert messages from the specified 'sender' to AI messages.
- Merge chat runs together.
- Merge chat runs together in a chat session.
langchain.chat_models
Chat Models are a variation on language models.
While Chat Models use language models under the hood, the interface they expose is a bit different. Rather than expose a “text in, text out” API, they expose an interface where “chat messages” are the inputs and outputs.
Class hierarchy:
BaseLanguageModel --> BaseChatModel --> <name> # Examples: ChatOpenAI, ChatGooglePalm
Main helpers:
AIMessage, BaseMessage, HumanMessage
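For example, the message-in, message-out interface (a minimal sketch; assumes an OpenAI API key is configured):

from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage

chat = ChatOpenAI(temperature=0)
reply = chat([HumanMessage(content="Translate 'hello' to French.")])
print(reply.content)  # the returned AIMessage's text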
Classes

- Anthropic chat large language models.
- Anyscale Chat large language models.
- Azure OpenAI Chat Completion API.
- AzureML Chat models API.
- Content formatter for LLaMA.
- Baichuan chat models API by Baichuan Intelligent Technology.
- Baidu Qianfan chat models.
- A chat model that uses the Bedrock API.
- Adapter class to prepare the inputs from LangChain to the prompt format that the chat model expects.
- Cohere chat large language models.
- ERNIE-Bot large language model.
- EverlyAI Chat large language models.
- Fake ChatModel for testing purposes.
- Fake ChatModel for testing purposes.
- Fireworks Chat models.
- GigaChat large language models API.
- Google PaLM Chat models API.
- Error with the Google PaLM API.
- ChatModel which returns user input as the response.
- Tencent Hunyuan chat models API by Tencent.
- Javelin AI Gateway chat models API.
- Parameters for the Javelin AI Gateway LLM.
- Jina AI Chat models API.
- ChatKonko Chat large language models API.
- A chat model that uses the LiteLLM API.
- Error with the LiteLLM I/O library.
- Wrapper around Minimax large language models.
- MLflow AI Gateway chat models API.
- Parameters for the MLflow AI Gateway LLM.
- Ollama locally runs large language models.
- OpenAI Chat large language models API.
- EAS LLM Service chat model API.
- PromptLayer and OpenAI Chat large language models API.
- Alibaba Tongyi Qwen chat models API.
- Vertex AI Chat large language models API.
- Wrapper around YandexGPT large language models.
Functions

- Format a list of messages into a full prompt for the Anthropic model.
- Convert a message to a dictionary that can be passed to the API.
- Get the request for the Cohere chat API.
- Get the role of the message.
- Use tenacity to retry the async completion call.
- Use tenacity to retry the completion call for streaming.
- Use tenacity to retry the completion call.
- Convert a dict response to a message.
- Use tenacity to retry the async completion call.
- Use tenacity to retry the completion call.
- Use tenacity to retry the async completion call.
- Use tenacity to retry the async completion call.
- Use tenacity to retry the async completion call.
langchain.docstore
Docstores are classes to store and load Documents.
The Docstore is a simplified version of the Document Loader.
Class hierarchy:
Docstore --> <name> # Examples: InMemoryDocstore, Wikipedia
Main helpers:
Document, AddableMixin
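For example, the lookup interface of the in-memory docstore (a minimal sketch; the key and document are illustrative):

from langchain.docstore import InMemoryDocstore
from langchain.schema import Document

store = InMemoryDocstore({"1": Document(page_content="hello world")})
doc = store.search("1")
print(doc.page_content)  # "hello world"; a missing key returns a "not found" string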
Classes

- LangChain Docstore via an arbitrary lookup function.
- Mixin class that supports adding texts.
- Interface to access a place that stores documents.
- Simple in-memory docstore in the form of a dict.
- Wrapper around the Wikipedia API.
langchain.document_loaders
Document Loaders are classes to load Documents.
Document Loaders are usually used to load a lot of Documents in a single run.
Class hierarchy:
BaseLoader --> <name>Loader # Examples: TextLoader, UnstructuredFileLoader
Main helpers:
Document, <name>TextSplitter
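For example, the shared loader interface (a minimal sketch; the file path is an illustrative placeholder):

from langchain.document_loaders import TextLoader

loader = TextLoader("./state_of_the_union.txt")
docs = loader.load()  # a list of Document objects with page_content and metadata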
Classes

- Load acreom vault from a directory.
- Load with an Airbyte source connector implemented using the CDK.
- Load from Gong using an Airbyte source connector.
- Load from Hubspot using an Airbyte source connector.
- Load from Salesforce using an Airbyte source connector.
- Load from Shopify using an Airbyte source connector.
- Load from Stripe using an Airbyte source connector.
- Load from Typeform using an Airbyte source connector.
- Load from Zendesk Support using an Airbyte source connector.
- Load local Airbyte json files.
- Load the Airtable tables.
- Load datasets from the Apify web scraping, crawling, and data extraction platform.
- Load records from an ArcGIS FeatureLayer.
- Load a query result from Arxiv.
- Loader for AssemblyAI audio transcripts.
- Transcript format to use for the document loader.
- Load HTML asynchronously.
- Load AZLyrics webpages.
- Load from an Azure Blob Storage container.
- Load from Azure Blob Storage files.
- Load from a Baidu BOS directory.
- Load from a Baidu Cloud BOS file.
- Abstract interface for blob parsers.
- Interface for Document Loader.
- Base class for all loaders that use the O365 package.
- Load a bibtex file.
- Load from Google Cloud Platform BigQuery.
- Load BiliBili video transcripts.
- Load a Blackboard course.
- Load blobs in the local file system.
- Blob represents raw data by either reference or value.
- Abstract interface for blob loaders implementation.
- Load YouTube URLs as audio file(s).
- Load elements from a blockchain smart contract.
- Enumerator of the supported blockchains.
- Load with the Brave Search engine.
- Load webpages with the Browserless /content endpoint.
- Load conversations from exported ChatGPT data.
- Scrape HTML pages from URLs using a headless instance of Chromium.
- Load College Confidential webpages.
- Load and parse Documents concurrently.
- Load Confluence pages.
- Enumerator of the content formats of a Confluence page.
- Load CoNLL-U files.
- Load a CSV file into a list of Documents.
- Load CSV files using Unstructured.
- Load Cube semantic layer metadata.
- Load Datadog logs.
- Initialize with dataframe object.
- Load Pandas DataFrame.
- Load Diffbot json file.
- Load from a directory.
- Load Discord chat logs.
- Load from Docugami.
- Loader that leverages the SitemapLoader to loop through the generated pages of a Docusaurus documentation website and extracts the content by looking for specific HTML tags.
- Load files from Dropbox.
- Load from DuckDB.
- Loads Outlook Message files using extract_msg.
- Load email files using Unstructured.
- Base loader for the Embaas document extraction API.
- Load Embaas blob.
- Parameters for the Embaas document extraction API.
- Payload for the Embaas document extraction API.
- Load from Embaas.
- Load EPub files using Unstructured.
- Load transactions from Ethereum mainnet.
- Load from EverNote.
- Load Microsoft Excel files using Unstructured.
- Load Facebook Chat messages directory dump.
- Load from FaunaDB.
- Load Figma file.
- Load from GCS directory.
- Load from GCS file.
- Generic Document Loader.
- Load geopandas Dataframe.
- Load Git repository files.
- Load GitBook data.
- Load GitHub repository Issues.
- Load issues of a GitHub repository.
- Loader for Google Cloud Speech-to-Text audio transcripts.
- Load Google Docs from Google Drive.
- Load from Gutenberg.org.
- File encoding as the NamedTuple.
- Load Hacker News data.
- Load HTML files using Unstructured.
- Load HTML files and parse them with BeautifulSoup.
- Load from Hugging Face Hub datasets.
- Load iFixit repair guides, device wikis and answers.
- Load PNG and JPG files using Unstructured.
- Load image captions.
- Load IMSDb webpages.
- Load from IUGU.
- Load notes from Joplin.
- Load a JSON file using a jq schema.
- Load from lakeFS.
- Load from LarkSuite (FeiShu).
- Load Markdown files using Unstructured.
- Load the Mastodon 'toots'.
- Load from Alibaba Cloud MaxCompute table.
- Load MediaWiki dump from an XML file.
- Merge documents from a list of loaders.
- Parse MHTML files with BeautifulSoup.
- Load from Modern Treasury.
- Load MongoDB documents.
- Load news articles from URLs using Unstructured.
- Load Jupyter notebook (.ipynb) files.
- Load Notion directory dump.
- Load from Notion DB.
- Load from any file type using the Nuclia Understanding API.
- Load from a Huawei OBS directory.
- Load from a Huawei OBS file.
- Load Obsidian files from a directory.
- Load OpenOffice ODT files using Unstructured.
- Load from Microsoft OneDrive.
- Load a file from Microsoft OneDrive.
- Load from Open City.
- Load Org-Mode files using Unstructured.
- Transcribe and parse audio files.
- Transcribe and parse audio files with the OpenAI Whisper model.
- Transcribe and parse audio files.
- Google Cloud Document AI parser.
- A dataclass to store Document AI parsing results.
- Parser that uses mime-types to parse a blob.
- Load article PDF files using Grobid.
- Exception raised when the Grobid server is unavailable.
- Parse HTML files using Beautiful Soup.
- Code segmenter for COBOL.
- Abstract class for the code segmenter.
- Code segmenter for JavaScript.
- Parse using the respective programming language syntax.
- Code segmenter for Python.
- Parse the Microsoft Word documents from a blob.
- Send PDF files to Amazon Textract and parse them.
- Loads a PDF with Azure Document Intelligence (formerly Forms Recognizer) and chunks at character level.
- Parse PDF using PDFMiner.
- Parse PDF with PDFPlumber.
- Parse PDF using PyMuPDF.
- Load PDF using pypdf.
- Parse PDF with PyPDFium2.
- Parser for text blobs.
- Load PDF files from a local file system, HTTP or S3.
- Base Loader class for PDF files.
- Loads a PDF with Azure Document Intelligence.
- Load PDF files using the Mathpix service.
- Load online PDF.
- Load PDF files using PDFMiner.
- Load PDF files as HTML content using PDFMiner.
- Load PDF files using pdfplumber.
- Load PDF files using PyMuPDF.
- Load a directory with PDF files using pypdf and chunks at character level.
- Load PDF using pypdf into a list of documents.
- Load PDF using pypdfium2 and chunks at character level.
- Load PDF files using Unstructured.
- Load Polars DataFrame.
- Load Microsoft PowerPoint files using Unstructured.
- Load from Psychic.dev.
- Load from the PubMed biomedical library.
- Load PySpark DataFrames.
- Load Python files, respecting any non-default encoding if specified.
- Load Quip pages.
- Load a ReadTheDocs documentation directory.
- Load all child links from a URL page.
- Load Reddit posts.
- Load Roam files from a directory.
- Column not found error.
- Load from a Rockset database.
- Loads content from RSpace notebooks, folders, documents or PDF Gallery files into LangChain documents.
- Load news articles from RSS feeds using Unstructured.
- Load RST files using Unstructured.
- Load RTF files using Unstructured.
- Load from an Amazon AWS S3 directory.
- Load from an Amazon AWS S3 file.
- Load from SharePoint.
- Load a sitemap and its URLs.
- Load from a Slack directory dump.
- Load from the Snowflake API.
- Load from the Spreedly API.
- Load .srt (subtitle) files.
- Load from the Stripe API.
- Load Telegram chat json directory dump.
- Load from Telegram chat dump.
- Load from a Tencent Cloud COS directory.
- Load from a Tencent Cloud COS file.
- Load from a TensorFlow Dataset.
- Load a text file.
- Load HTML using the 2markdown API.
- Load TOML files.
- Load cards from a Trello board.
- Load TSV files using Unstructured.
- Load Twitter tweets.
- Load files using the Unstructured API.
- Load files using the Unstructured API.
- Base Loader that uses Unstructured.
- Load files using Unstructured.
- Load files using Unstructured.
- Load files from remote URLs using Unstructured.
- Abstract base class for all evaluators.
- Load HTML pages with Playwright and parse with Unstructured.
- Evaluates the page HTML content using the unstructured library.
- Load HTML pages with Selenium and parse with Unstructured.
- Load weather data with the Open Weather Map API.
- Load HTML pages using urllib and parse them with BeautifulSoup.
- Load WhatsApp messages text file.
- Load from Wikipedia.
- Load DOCX file using docx2txt and chunks at character level.
- Load Microsoft Word file using Unstructured.
- Load XML file using Unstructured.
- Load Xorbits DataFrame.
- Generic Google API Client.
- Load all Videos from a YouTube Channel.
- Load YouTube transcripts.
Functions

- Fetch the mime types for the specified file types.
- Combine message information in a readable format ready to be used.
- Combine message information in a readable format ready to be used.
- Try to detect the file encoding.
- Combine cell information in a readable format ready to be used.
- Recursively remove newlines, no matter the data structure they are stored in.
- Extract text from images with RapidOCR.
- Get a parser by parser name.
- Default joiner for content columns.
- Combine message information in a readable format ready to be used.
- Convert a string or list of strings to a list of Documents with metadata.
- Retrieve a list of elements from the Unstructured API.
- Check if the installed Unstructured version exceeds the minimum version for the feature in question.
- Raise an error if the Unstructured version does not exceed the specified minimum.
- Combine message information in a readable format ready to be used.
langchain.document_transformers
Document Transformers are classes to transform Documents.
Document Transformers are usually used to transform a lot of Documents in a single run.
Class hierarchy:
BaseDocumentTransformer --> <name> # Examples: DoctranQATransformer, DoctranTextTranslator
Main helpers:
Document
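For example, one transformer in this module, LongContextReorder, reorders documents so the most relevant land at the start and end of the list (a minimal sketch; the documents are illustrative):

from langchain.document_transformers import LongContextReorder
from langchain.schema import Document

docs = [Document(page_content=f"doc {i}") for i in range(6)]
reordered = LongContextReorder().transform_documents(docs)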
Classes

- Transform HTML content by extracting specific tags and removing unwanted ones.
- Extract properties from text documents using doctran.
- Extract QA from text documents using doctran.
- Translate text documents using doctran.
- Perform K-means clustering on document vectors.
- Filter that drops redundant documents by comparing their embeddings.
- Translate text documents using Google Cloud Translation.
- Replace occurrences of a particular search pattern with a replacement string.
- Lost in the middle: performance degrades when models must access relevant information in the middle of long contexts.
- The Nuclia Understanding API splits text into paragraphs and sentences, identifies entities, provides a summary of the text, and generates embeddings for all sentences.
- Extract metadata tags from document contents using OpenAI functions.
Functions

- Convert a list of documents to a list of documents with state.
- Create a DocumentTransformer that uses an OpenAI function chain to automatically tag documents with metadata based on their content and an input schema.
langchain.embeddings
Embedding models are wrappers around embedding models from different APIs and services.
Embedding models can be LLMs or not.
Class hierarchy:
Embeddings --> <name>Embeddings # Examples: OpenAIEmbeddings, HuggingFaceEmbeddings
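For example, the shared Embeddings interface (a minimal sketch; assumes an OpenAI API key is configured):

from langchain.embeddings import OpenAIEmbeddings

embeddings = OpenAIEmbeddings()
query_vector = embeddings.embed_query("hello world")      # one vector
doc_vectors = embeddings.embed_documents(["foo", "bar"])  # one vector per text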
Classes

- Aleph Alpha's asymmetric semantic embedding.
- The symmetric version of Aleph Alpha's semantic embeddings.
- Embedding documents and queries with Awa DB.
- Azure OpenAI Embeddings API.
- Baidu Qianfan Embeddings embedding models.
- Bedrock embedding models.
- Interface for caching results from embedding models.
- Clarifai embedding models.
- Cohere embedding models.
- DashScope embedding models.
- Deep Infra's embedding inference service.
- EdenAI embedding.
- Elasticsearch embedding models.
- Embaas's embedding service.
- Payload for the Embaas embeddings API.
- Ernie Embeddings V1 embedding models.
- Fake embedding model that always returns the same embedding vector for the same text.
- Fake embedding model.
- Qdrant FastEmbedding models.
- Google's PaLM Embeddings APIs.
- GPT4All embedding models.
- Gradient.ai Embedding models.
- A helper tool to embed Gradient.
- HuggingFace BGE sentence_transformers embedding models.
- HuggingFace sentence_transformers embedding models.
- Embed texts using the HuggingFace API.
- Wrapper around sentence_transformers embedding models.
- HuggingFaceHub embedding models.
- Wrapper around embeddings LLMs in the Javelin AI Gateway.
- Jina embedding models.
- JohnSnowLabs embedding models.
- llama.cpp embedding models.
- LLMRails embedding models.
- LocalAI embedding models.
- MiniMax's embedding service.
- Wrapper around embeddings LLMs in the MLflow AI Gateway.
- ModelScopeHub embedding models.
- MosaicML embedding service.
- NLP Cloud embedding models.
- OctoAI Compute Service embedding models.
- Ollama locally runs large language models.
- OpenAI embedding models.
- Content handler for LLM class.
- Custom Sagemaker Inference Endpoints.
- Custom embedding models on self-hosted remote hardware.
- HuggingFace embedding models on self-hosted remote hardware.
- HuggingFace InstructEmbedding models on self-hosted remote hardware.
- Embeddings by SpaCy models.
- TensorflowHub embedding models.
- Google Cloud VertexAI embedding models.
- Voyage embedding models.
- Xinference embedding models.
Functions

- Use tenacity to retry the embedding call.
- Use tenacity to retry the completion call.
- Use tenacity to retry the embedding call.
- Use tenacity to retry the embedding call.
- Use tenacity to retry the completion call.
- Use tenacity to retry the embedding call.
- Use tenacity to retry the embedding call.
- Load the embedding model.
- Use tenacity to retry the embedding call.
langchain.evaluation
Evaluation chains for grading LLM and Chain outputs.
This module contains off-the-shelf evaluation chains for grading the output of LangChain primitives such as language models and chains.
Loading an evaluator
To load an evaluator, you can use the load_evaluators or load_evaluator functions with the names of the evaluators to load.
from langchain.evaluation import load_evaluator
evaluator = load_evaluator("qa")
evaluator.evaluate_strings(
    prediction="We sold more than 40,000 units last week",
    input="How many units did we sell last week?",
    reference="We sold 32,378 units",
)
The evaluator must be one of EvaluatorType.
Datasets
To load one of the LangChain HuggingFace datasets, you can use the load_dataset function with the name of the dataset to load.
from langchain.evaluation import load_dataset
ds = load_dataset("llm-math")
Some common use cases for evaluation include:

- Grading the accuracy of a response against ground truth answers: QAEvalChain
- Comparing the output of two models: PairwiseStringEvalChain, or LabeledPairwiseStringEvalChain when there is additionally a reference label.
- Judging the efficacy of an agent's tool usage: TrajectoryEvalChain
- Checking whether an output complies with a set of criteria: CriteriaEvalChain, or LabeledCriteriaEvalChain when there is additionally a reference label.
- Computing semantic difference between a prediction and reference: EmbeddingDistanceEvalChain, or between two predictions: PairwiseEmbeddingDistanceEvalChain
- Measuring the string distance between a prediction and reference: StringDistanceEvalChain, or between two predictions: PairwiseStringDistanceEvalChain
Low-level API
These evaluators implement one of the following interfaces:

- StringEvaluator: Evaluate a prediction string against a reference label and/or input context.
- PairwiseStringEvaluator: Evaluate two prediction strings against each other. Useful for scoring preferences, measuring similarity between two chain or LLM agents, or comparing outputs on similar inputs.
- AgentTrajectoryEvaluator: Evaluate the full sequence of actions taken by an agent.

These interfaces enable easier composability and usage within a higher-level evaluation framework.
Classes

- A named tuple containing the score and reasoning for a trajectory.
- A chain for evaluating ReAct-style agents.
- Trajectory output parser.
- A chain for comparing two outputs, such as the outputs of two models, prompts, or outputs of a single model on similar inputs.
- A chain for comparing two outputs, such as the outputs of two models, prompts, or outputs of a single model on similar inputs.
- A parser for the output of the PairwiseStringEvalChain.
- A Criteria to evaluate.
- LLM Chain for evaluating runs against criteria.
- A parser for the output of the CriteriaEvalChain.
- Criteria evaluation chain that requires references.
- Embedding Distance Metric.
- Use embedding distances to score semantic difference between a prediction and reference.
- Use embedding distances to score semantic difference between two predictions.
- Compute an exact match between the prediction and the reference.
- Evaluates whether the prediction is equal to the reference after parsing both as JSON.
- Evaluates whether the prediction is valid JSON.
- An evaluator that calculates the edit distance between JSON strings.
- An evaluator that validates a JSON prediction against a JSON schema reference.
- LLM Chain for evaluating QA without ground truth, based on context.
- LLM Chain for evaluating QA using chain-of-thought reasoning.
- LLM Chain for evaluating question answering.
- LLM Chain for generating examples for question answering.
- Compute a regex match between the prediction and the reference.
- Interface for evaluating agent trajectories.
- The types of the evaluators.
- A base class for evaluators that use an LLM.
- Compare the output of two models (or two outputs of the same model).
- Grade, tag, or otherwise evaluate predictions relative to their inputs and/or reference labels.
- A chain for scoring the output of a model on a scale of 1-10.
- A chain for scoring on a scale of 1-10 the output of a model.
- A parser for the output of the ScoreStringEvalChain.
- Compute string edit distances between two predictions.
- Distance metric to use.
- Compute string distances between the prediction and the reference.
Functions

- Resolve the criteria for the pairwise evaluator.
- Resolve the criteria to evaluate.
- Load a dataset from the LangChainDatasets on HuggingFace.
- Load the requested evaluation chain specified by a string.
- Load evaluators specified by a list of evaluator types.
- Resolve the criteria for the pairwise evaluator.
langchain.graphs
Graphs provide a natural language interface to graph databases.
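For example, the local NetworkX-backed graph (a minimal sketch; the triple is illustrative):

from langchain.graphs import NetworkxEntityGraph
from langchain.graphs.networkx_graph import KnowledgeTriple

graph = NetworkxEntityGraph()
graph.add_triple(KnowledgeTriple("LangChain", "is written in", "Python"))
print(graph.get_triples())  # the stored triples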
Classes

- ArangoDB wrapper for graph operations.
- FalkorDB wrapper for graph operations.
- Represents a graph document consisting of nodes and relationships.
- Represents a node in a graph with associated properties.
- Represents a directed relationship between two nodes in a graph.
- An abstract class wrapper for graph operations.
- HugeGraph wrapper for graph operations.
- Kùzu wrapper for graph operations.
- Memgraph wrapper for graph operations.
- NebulaGraph wrapper for graph operations.
- Neo4j wrapper for graph operations.
- Neptune wrapper for graph operations.
- A class to handle queries that fail to execute.
- A triple in the graph.
- NetworkX wrapper for entity graph operations.
- RDFlib wrapper for graph operations.
Functions

- Get the ArangoDB client from credentials.
- Extract entities from the entity string.
- Parse knowledge triples from the knowledge string.
langchain.hub
Interface with the LangChain Hub.
Functions

- Pulls an object from the hub and returns it as a LangChain object.
- Pushes an object to the hub and returns the URL it can be viewed at in a browser.
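For example (a minimal sketch; assumes the langchainhub package is installed and network access is available; the handle is a public example prompt):

from langchain import hub

prompt = hub.pull("hwchase17/react")  # fetch a prompt as a LangChain object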
langchain.indexes
Code to support various indexing workflows.
Provides code to:

- Create knowledge graphs from data.
- Support indexing workflows from LangChain data loaders to vectorstores.
For indexing workflows, this code is used to avoid writing duplicated content into the vectorstore and to avoid overwriting content if it's unchanged.
Importantly, this keeps working even if the content being written is derived via a set of transformations from some source content (e.g., indexing child documents that were derived from parent documents by chunking).
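For example, the VectorstoreIndexCreator offers the simplest path from loaders to a queryable index (a minimal sketch; the file path is an illustrative placeholder, and the default embeddings and vectorstore assume an OpenAI API key and the chromadb package):

from langchain.document_loaders import TextLoader
from langchain.indexes import VectorstoreIndexCreator

# Build a vectorstore-backed index directly from document loaders.
index = VectorstoreIndexCreator().from_loaders([TextLoader("./notes.txt")])
print(index.query("What do the notes say?"))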
Classes

- An abstract base class representing the interface for a record manager.
- Functionality to create a graph index.
- Wrapper around a vectorstore for easy access.
- Logic for creating indexes.

Functions
langchain.llms
LLM classes provide access to the large language model (LLM) APIs and services.
Class hierarchy:
BaseLanguageModel --> BaseLLM --> LLM --> <name> # Examples: AI21, HuggingFaceHub, OpenAI
Main helpers:
LLMResult, PromptValue,
CallbackManagerForLLMRun, AsyncCallbackManagerForLLMRun,
CallbackManager, AsyncCallbackManager,
AIMessage, BaseMessage
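For example, the "text in, text out" interface (a minimal sketch; assumes an OpenAI API key is configured):

from langchain.llms import OpenAI

llm = OpenAI(temperature=0)
print(llm("Say hello in French."))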
Classes

- AI21 large language models.
- Parameters for AI21 penalty data.
- Aleph Alpha large language models.
- Amazon API Gateway to access LLM models hosted on AWS.
- Adapter to prepare the inputs from LangChain to a format that the LLM model expects.
- Anthropic large language models.
- Anyscale large language models.
- Arcee's Domain Adapted Language Models (DALMs).
- Aviary hosted models.
- Aviary backend.
- AzureML Managed Endpoint client.
- Azure ML Online Endpoint models.
- Transform request and response of the AzureML endpoint to match the required schema.
- Content handler for the Dolly-v2-12b model.
- Content handler for GPT2.
- Content handler for LLMs from the HuggingFace catalog.
- Content formatter for LLaMa.
- Deprecated: kept for backwards compatibility.
- Baidu Qianfan hosted open source or customized models.
- Banana large language models.
- Baseten models.
- Beam API for the gpt2 large language model.
- Bedrock models.
- Base class for Bedrock models.
- Adapter class to prepare the inputs from LangChain to a format that the LLM model expects.
- NIBittensor LLMs.
- CerebriumAI large language models.
- ChatGLM LLM service.
- Clarifai large language models.
- Base class for Cohere models.
- Cohere large language models.
- C Transformers LLM models.
- CTranslate2 language model.
- Databricks serving endpoint or a cluster driver proxy app for LLM.
- DeepInfra models.
- Neural Magic DeepSparse LLM interface.
- Wrapper around edenai models.
- Fake LLM for testing purposes.
- Fake streaming list LLM for testing purposes.
- Fireworks models.
- ForefrontAI large language models.
- GigaChat large language models API.
- Google PaLM models.
- GooseAI large language models.
- GPT4All language models.
- Gradient.ai LLM Endpoints.
- Train result.
- HuggingFace Endpoint models.
- HuggingFaceHub models.
- HuggingFace Pipeline API.
- HuggingFace text generation API.
- It returns user input as the response.
- Javelin AI Gateway LLMs.
- Parameters for the Javelin AI Gateway LLM.
- Kobold API language model.
- llama.cpp model.
- HazyResearch's Manifest library.
- Wrapper around Minimax large language models.
- Common parameters for Minimax large language models.
- Wrapper around completions LLMs in the MLflow AI Gateway.
- Parameters for the MLflow AI Gateway LLM.
- Modal large language models.
- MosaicML LLM service.
- NLPCloud large language models.
- OctoAI LLM Endpoints.
- Ollama locally runs large language models.
- An LLM wrapper that uses OpaquePrompts to sanitize prompts.
- Azure-specific OpenAI large language models.
- Base OpenAI large language model class.
- OpenAI large language models.
- OpenAI Chat large language models.
- Parameters for identifying a model as a typed dict.
- OpenLLM, supporting both in-process model instances and remote OpenLLM servers.
- OpenLM models.
- LangChain LLM class to help access the EAS LLM service.
- Petals Bloom models.
- PipelineAI large language models.
- Use your Predibase models with LangChain.
- Prediction Guard large language models.
- PromptLayer OpenAI large language models.
- Wrapper around OpenAI large language models.
- Replicate models.
- RWKV language models.
- A handler class to transform input from the LLM to a format that the SageMaker endpoint expects.
- Content handler for LLM class.
- A helper class for parsing the byte stream input.
- Sagemaker Inference Endpoint models.
- Model inference on self-hosted remote hardware.
- HuggingFace Pipeline API to run on self-hosted remote hardware.
- StochasticAI large language models.
- Nebula Service models.
- text-generation-webui models.
- Wrapper around Titan Takeoff APIs.
- Create a new model by parsing and validating input data from keyword arguments.
- Wrapper around Together AI models.
- Tongyi Qwen large language models.
- Google Vertex AI large language models.
- Large language models served from Vertex AI Model Garden.
- VLLM language model.
- vLLM OpenAI-compatible API client.
- Writer large language models.
- Wrapper for accessing Xinference's large-scale model inference service.
- Yandex large language models.
Functions

- Create the LLMResult from the choices and prompts.
- Update token usage.
- Get completions from Aviary models.
- List available models.
- Use tenacity to retry the completion call.
- Use tenacity to retry the completion call.
- Gets the default Databricks personal access token.
- Gets the default Databricks workspace hostname.
- Gets the notebook REPL context if running inside a Databricks notebook.
- Use tenacity to retry the completion call.
- Use tenacity to retry the completion call.
- Use tenacity to retry the completion call for streaming.
- Use tenacity to retry the completion call.
- Use tenacity to retry the completion call.
- Use tenacity to retry the completion call.
- Remove trailing slash and /api from the URL if present.
- Load an LLM from a file.
- Load an LLM from a config dict.
- Use tenacity to retry the async completion call.
- Use tenacity to retry the completion call.
- Update token usage.
- Use tenacity to retry the completion call.
- Generate text from the model.
- Use tenacity to retry the completion call.
- Use tenacity to retry the completion call.
- Cut off the text as soon as any stop words occur.
- Use tenacity to retry the completion call.
- Use tenacity to retry the completion call.
- Returns True if the model name is a Codey model.
- Use tenacity to retry the completion call.
langchain.memory
Memory maintains Chain state, incorporating context from past runs.
Class hierarchy for Memory:
BaseMemory --> BaseChatMemory --> <name>Memory # Examples: ZepMemory, MotorheadMemory
Main helpers:
BaseChatMessageHistory
Chat Message History stores the chat message history in different stores.
Class hierarchy for ChatMessageHistory:
BaseChatMessageHistory --> <name>ChatMessageHistory # Example: ZepChatMessageHistory
Main helpers:
AIMessage, BaseMessage, HumanMessage
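For example, the buffer memory interface (a minimal sketch; the conversation turns are illustrative):

from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory()
memory.save_context({"input": "Hi there"}, {"output": "Hello! How can I help?"})
print(memory.load_memory_variables({}))  # {'history': 'Human: Hi there\nAI: ...'}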
Classes

- Buffer for storing conversation memory.
- Buffer for storing conversation memory.
- Buffer for storing conversation memory inside a limited-size window.
- Abstract base class for chat memory.
- Chat message history that stores history in Cassandra.
- Chat message history backed by Azure CosmosDB.
- Chat message history that stores history in AWS DynamoDB.
- Chat message history that stores history in Elasticsearch.
- Chat message history that stores history in a local file.
- Chat message history backed by Google Firestore.
- In-memory implementation of chat message history.
- Chat message history cache that uses Momento as a backend.
- Chat message history that stores history in MongoDB.
- Chat message history stored in a Neo4j database.
- Chat message history stored in a Postgres database.
- Chat message history stored in a Redis database.
- Uses Rockset to store chat messages.
- Chat message history stored in a SingleStoreDB database.
- The class responsible for converting BaseMessage to your SQLAlchemy model.
- The default message converter for SQLChatMessageHistory.
- Chat message history stored in an SQL database.
- Chat message history that stores messages in Streamlit session state.
- Chat message history stored in an Upstash Redis database.
- Chat message history stored in a Xata database.
- Chat message history that uses Zep as a backend.
- Combining multiple memories' data together.
- Abstract base class for an Entity store.
- Entity extractor & summarizer memory.
- In-memory Entity store.
- Redis-backed Entity store.
- SQLite-backed Entity store.
- Upstash Redis-backed Entity store.
- Knowledge graph conversation memory.
- Chat message memory backed by the Motorhead service.
- A memory wrapper that is read-only and cannot be changed.
- Simple memory for storing context or other information that shouldn't ever change between prompts.
- Conversation summarizer to chat memory.
- Mixin for summarizer.
- Buffer with summarizer for storing conversation memory.
- Conversation chat memory with token limit.
- VectorStoreRetriever-backed memory.
- Persist your chain history to the Zep MemoryStore.
Functions

- Create a message model for a given table name.
- Get the prompt input key.
langchain.model_laboratory
Experiment with different models.
Classes

- Experiment with different models.
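For example (a minimal sketch; assumes API keys for both providers are configured, and the question is illustrative):

from langchain.llms import Cohere, OpenAI
from langchain.model_laboratory import ModelLaboratory

lab = ModelLaboratory.from_llms([OpenAI(temperature=0), Cohere(temperature=0)])
lab.compare("What color is a flamingo?")  # prints each model's answer side by side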
langchain.output_parsers
OutputParser classes parse the output of an LLM call.
Class hierarchy:
BaseLLMOutputParser --> BaseOutputParser --> <name>OutputParser # ListOutputParser, PydanticOutputParser
Main helpers:
Serializable, Generation, PromptValue
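For example, the pydantic parser (a minimal sketch; the schema and JSON string are illustrative):

from langchain.output_parsers import PydanticOutputParser
from langchain.pydantic_v1 import BaseModel

class Joke(BaseModel):
    setup: str
    punchline: str

parser = PydanticOutputParser(pydantic_object=Joke)
joke = parser.parse('{"setup": "Why?", "punchline": "Because."}')
print(joke.punchline)  # parser.get_format_instructions() supplies the prompt-side contract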
Classes

- Parse the output of an LLM call to a boolean.
- Combine multiple output parsers into one.
- Parse the output of an LLM call to a datetime.
- Parse an output that is one of a set of values.
- Parse an output as an element of the JSON object.
- Parse an output as the JSON object.
- Parse an output that is one of sets of values.
- Parse an output as an attribute of a pydantic object.
- Parse an output as a pydantic object.
- Wraps a parser and tries to fix parsing errors.
- Parse the output of an LLM call to a JSON object.
- Parse an output as an element of the JSON object.
- Parse an output as the JSON object.
- Parse an output that is one of sets of values.
- Parse an output as an attribute of a pydantic object.
- Parse an output as a pydantic object.
- Parse tools from OpenAI response.
- Parse tools from OpenAI response.
- Parse tools from OpenAI response.
- Parse an output using a pydantic model.
- Parse the output of an LLM call using Guardrails.
- Parse the output of an LLM call using a regex.
- Parse the output of an LLM call into a dictionary using a regex.
- Wraps a parser and tries to fix parsing errors.
- Wraps a parser and tries to fix parsing errors.
- A schema for a response from a structured output parser.
- Parse the output of an LLM call to a structured output.
- Parse an output using XML format.
Functions¶
Parse a JSON string from a Markdown string and check that it contains the expected keys. |
|
Parse a JSON string from a Markdown string. |
|
Parse a JSON string that may be missing closing braces. |
|
Load an output parser. |
langchain.prompts
¶
Prompt is the input to the model.
Prompt is often constructed from multiple components. Prompt classes and functions make constructing
and working with prompts easy.
Class hierarchy:
BasePromptTemplate --> PipelinePromptTemplate
StringPromptTemplate --> PromptTemplate
FewShotPromptTemplate
FewShotPromptWithTemplates
BaseChatPromptTemplate --> AutoGPTPrompt
ChatPromptTemplate --> AgentScratchPadChatPromptTemplate
BaseMessagePromptTemplate --> MessagesPlaceholder
BaseStringMessagePromptTemplate --> ChatMessagePromptTemplate
HumanMessagePromptTemplate
AIMessagePromptTemplate
SystemMessagePromptTemplate
PromptValue --> StringPromptValue
ChatPromptValue
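A short usage sketch of the two most common templates from the hierarchy above:
from langchain.prompts import ChatPromptTemplate, PromptTemplate

prompt = PromptTemplate.from_template("Tell me a {adjective} joke about {topic}.")
print(prompt.format(adjective="funny", topic="yaks"))

chat_prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    ("human", "{question}"),
])
messages = chat_prompt.format_messages(question="What is a prompt template?")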
Classes¶
|
Select and order examples based on ngram overlap score (sentence_bleu score). |
Functions¶
|
Compute ngram overlap score of source and example as sentence_bleu score. |
langchain.retrievers
¶
Retriever class returns Documents given a text query.
It is more general than a vector store. A retriever does not need to be able to store documents, only to return (or retrieve) them. Vector stores can be used as the backbone of a retriever, but there are other types of retrievers as well.
Class hierarchy:
BaseRetriever --> <name>Retriever # Examples: ArxivRetriever, MergerRetriever
Main helpers:
Document, Serializable, Callbacks,
CallbackManagerForRetrieverRun, AsyncCallbackManagerForRetrieverRun
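A minimal custom retriever sketch (ToyRetriever is made up for illustration; the real retrievers below wrap search services or vector stores):
from typing import List

from langchain.callbacks.manager import CallbackManagerForRetrieverRun
from langchain.schema import BaseRetriever, Document

class ToyRetriever(BaseRetriever):
    """Return the stored documents that contain the query string."""

    documents: List[Document]

    def _get_relevant_documents(
        self, query: str, *, run_manager: CallbackManagerForRetrieverRun
    ) -> List[Document]:
        return [d for d in self.documents if query.lower() in d.page_content.lower()]

retriever = ToyRetriever(documents=[Document(page_content="Retrievers return documents.")])
docs = retriever.get_relevant_documents("documents")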
Classes¶
Document retriever for Arcee's Domain Adapted Language Models (DALMs). |
|
Arxiv retriever. |
|
|
Azure Cognitive Search service retriever. |
BM25 retriever without Elasticsearch. |
|
Chaindesk API retriever. |
|
ChatGPT plugin retriever. |
|
Cohere Chat API with RAG. |
|
|
Retriever that wraps a base retriever and compresses the results. |
Databerry API retriever. |
|
DocArray Document Indices retriever. |
|
|
Enumerator of the types of search to perform. |
Base class for document compressors. |
|
|
Document compressor that uses a pipeline of Transformers. |
|
Document compressor that uses an LLM chain to extract the relevant parts of documents. |
|
Parse outputs that could return a null string of some sort. |
Filter that drops documents that aren't relevant to the query. |
|
Document compressor that uses Cohere Rerank API. |
|
|
Document compressor that uses embeddings to drop documents unrelated to the query. |
Elasticsearch retriever that uses BM25. |
|
Embedchain retriever. |
|
Retriever that ensembles multiple retrievers. |
|
|
A retriever based on Document AI Warehouse. |
|
Google Vertex Search API retriever alias for backwards compatibility. |
|
Google Vertex AI Search retriever for multi-turn conversations. |
|
Google Vertex AI Search retriever. |
Retriever for Kay.ai datasets. |
|
Additional result attribute. |
|
Value of an additional result attribute. |
|
Amazon Kendra Index retriever. |
|
Document attribute. |
|
Value of a document attribute. |
|
Information that highlights the keywords in the excerpt. |
|
Amazon Kendra Query API search result. |
|
Query API result item. |
|
Base class of a result item. |
|
Amazon Kendra Retrieve API search result. |
|
Retrieve API result item. |
|
Text with highlights. |
|
KNN retriever. |
|
LlamaIndex graph data structure retriever. |
|
LlamaIndex retriever. |
|
Retriever that merges the results of multiple retrievers. |
|
Metal API retriever. |
|
Milvus API retriever. |
|
List of lines. |
|
Output parser for a list of lines. |
|
Given a query, use an LLM to write a set of queries. |
|
Retrieve from a set of multiple embeddings for the same document. |
|
|
Retrieve small chunks then retrieve their parent documents. |
|
Pinecone Hybrid Search retriever. |
PubMed API retriever. |
|
Given a query, use an LLM to re-phrase it. |
|
LangChain API retriever. |
|
Retriever that uses a vector store and an LLM to generate the vector store queries. |
|
Translate Chroma internal query language elements to valid filters. |
|
Logic for converting internal query language elements to valid filters. |
|
Translate DeepLake internal query language elements to valid filters. |
|
|
Translate Elasticsearch internal query language elements to valid filters. |
Translate Milvus internal query language elements to valid filters. |
|
Translate MyScale internal query language elements to valid filters. |
|
Translate OpenSearch internal query domain-specific language elements to valid filters. |
|
Translate Pinecone internal query language elements to valid filters. |
|
Translate Qdrant internal query language elements to valid filters. |
|
Translate Redis internal query language elements to valid filters. |
|
Translate Langchain filters to Supabase PostgREST filters. |
|
|
Translate the internal query language elements to valid filters. |
Translate Vectara internal query language elements to valid filters. |
|
Translate Weaviate internal query language elements to valid filters. |
|
SVM retriever. |
|
Search depth as enumerator. |
|
Tavily Search API retriever. |
|
TF-IDF retriever. |
|
|
Retriever that combines embedding similarity with recency in retrieving values. |
Vespa retriever. |
|
|
Weaviate hybrid search retriever. |
List of questions. |
|
Output parser for a list of numbered questions. |
|
Search queries to research for the user's goal. |
|
Google Search API retriever. |
|
Wikipedia API retriever. |
|
You retriever that uses You.com's search API. |
|
|
Which documents to search. |
|
Enumerator of the types of search to perform. |
Zep MemoryStore Retriever. |
|
Zilliz API retriever. |
Functions¶
|
Return the compression chain input. |
|
Return the compression chain input. |
|
Clean an excerpt from Kendra. |
Combine a ResultItem title and excerpt into a single string. |
|
|
Create an index of embeddings for a list of contexts. |
|
Deprecated MilvusRetriever. |
Create an index from a list of contexts. |
|
Hash a text using SHA256. |
|
Check if a string can be cast to a float. |
|
Convert a value to a string and add double quotes if it is a string. |
|
Convert a value to a string and add single quotes if it is a string. |
|
|
Create an index of embeddings for a list of contexts. |
|
Deprecated ZillizRetriever. |
langchain.runnables
¶
Classes¶
An instance of a runnable stored in the LangChain Hub. |
|
A function description for ChatOpenAI. |
|
A runnable that routes to the selected function. |
langchain.smith
¶
LangSmith utilities.
This module provides utilities for connecting to LangSmith. For more information on LangSmith, see the LangSmith documentation.
Evaluation
LangSmith helps you evaluate Chains and other language model application components using a number of LangChain evaluators.
An example of this is shown below, assuming you’ve created a LangSmith dataset called <my_dataset_name>:
from langsmith import Client

from langchain.chat_models import ChatOpenAI
from langchain.chains import LLMChain
from langchain.smith import RunEvalConfig, run_on_dataset

# Chains may have memory. Passing in a constructor function lets the
# evaluation framework avoid cross-contamination between runs.
def construct_chain():
    llm = ChatOpenAI(temperature=0)
    chain = LLMChain.from_string(
        llm,
        "What's the answer to {your_input_key}"
    )
    return chain

# Load off-the-shelf evaluators via config or the EvaluatorType (string or enum)
evaluation_config = RunEvalConfig(
    evaluators=[
        "qa",  # "Correctness" against a reference answer
        "embedding_distance",
        RunEvalConfig.Criteria("helpfulness"),
        RunEvalConfig.Criteria({
            "fifth-grader-score": "Do you have to be smarter than a fifth grader to answer this question?"
        }),
    ]
)

client = Client()
run_on_dataset(
    client,
    "<my_dataset_name>",
    construct_chain,
    evaluation=evaluation_config,
)
You can also create custom evaluators by subclassing the StringEvaluator or LangSmith’s RunEvaluator classes.
from typing import Optional

from langchain.evaluation import StringEvaluator

class MyStringEvaluator(StringEvaluator):

    @property
    def requires_input(self) -> bool:
        return False

    @property
    def requires_reference(self) -> bool:
        return True

    @property
    def evaluation_name(self) -> str:
        return "exact_match"

    def _evaluate_strings(self, prediction, reference=None, input=None, **kwargs) -> dict:
        return {"score": prediction == reference}

evaluation_config = RunEvalConfig(
    custom_evaluators = [MyStringEvaluator()],
)

run_on_dataset(
    client,
    "<my_dataset_name>",
    construct_chain,
    evaluation=evaluation_config,
)
Primary Functions
arun_on_dataset: Asynchronous function to evaluate a chain, agent, or other LangChain component over a dataset.
run_on_dataset: Function to evaluate a chain, agent, or other LangChain component over a dataset.
RunEvalConfig: Class representing the configuration for running evaluation. You can select evaluators by EvaluatorType or config, or you can pass in custom_evaluators.
Classes¶
Configuration for a given run evaluator. |
|
Configuration for a run evaluation. |
|
Create a new model by parsing and validating input data from keyword arguments. |
|
A simple progress bar for the console. |
|
Your architecture raised an error. |
|
Raised when the input format is invalid. |
|
A dictionary of the results of a single test run. |
|
Extract items to evaluate from the run object from a chain. |
|
Extract items to evaluate from the run object. |
|
Map an example, or row in the dataset, to the inputs of an evaluation. |
|
|
Evaluate Run and optional examples. |
Extract items to evaluate from the run object. |
|
Map an input to the tool. |
Functions¶
Generate a random name. |
|
Run the Chain or language model on a dataset and store traces to the specified project name. |
|
Run the Chain or language model on a dataset and store traces to the specified project name. |
langchain.storage
¶
Implementations of key-value stores and storage helpers.
This module provides implementations of various key-value stores that conform to a simple key-value interface.
The primary goal of these stores is to support caching.
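A quick sketch using the in-memory implementation (the Redis- and filesystem-backed stores expose the same mget/mset/yield_keys interface):
from langchain.storage import InMemoryStore

store = InMemoryStore()
store.mset([("user:1", "alice"), ("user:2", "bob")])
print(store.mget(["user:1", "user:2"]))        # ['alice', 'bob']
print(list(store.yield_keys(prefix="user:")))  # ['user:1', 'user:2']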
Classes¶
Wraps a store with key and value encoders/decoders. |
|
Raised when a key is invalid; e.g., uses incorrect characters. |
|
|
BaseStore interface that works on the local file system. |
In-memory implementation of the BaseStore using a dictionary. |
|
|
BaseStore implementation using Redis as the underlying store. |
|
BaseStore implementation using Upstash Redis as the underlying store. |
langchain.text_splitter
¶
Text Splitters are classes for splitting text.
Class hierarchy:
BaseDocumentTransformer --> TextSplitter --> <name>TextSplitter # Example: CharacterTextSplitter
RecursiveCharacterTextSplitter --> <name>TextSplitter
Note: MarkdownHeaderTextSplitter and HTMLHeaderTextSplitter do not derive from TextSplitter.
Main helpers:
Document, Tokenizer, Language, LineType, HeaderType
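For example, the recursive splitter tries progressively finer-grained separators until each chunk fits the size budget (a minimal sketch):
from langchain.text_splitter import RecursiveCharacterTextSplitter

text = "LangChain text splitters break long documents into chunks. " * 10
splitter = RecursiveCharacterTextSplitter(chunk_size=100, chunk_overlap=20)
chunks = splitter.split_text(text)        # list of strings
docs = splitter.create_documents([text])  # list of Document objects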
Classes¶
Splitting text that looks at characters. |
|
Element type as typed dict. |
|
|
Splitting HTML files based on specified headers. |
Header type as typed dict. |
|
|
Enum of the programming languages. |
|
Attempts to split the text along Latex-formatted layout elements. |
Line type as typed dict. |
|
Splitting markdown files based on specified headers. |
|
|
Attempts to split the text along Markdown-formatted headings. |
|
Splitting text using NLTK package. |
|
Attempts to split the text along Python syntax. |
Splitting text by recursively looking at characters. |
|
Splitting text to tokens using sentence model tokenizer. |
|
|
Splitting text using Spacy package. |
|
Interface for splitting text into chunks. |
Splitting text to tokens using model tokenizer. |
|
|
Tokenizer data class. |
Functions¶
|
Split incoming text and return chunks using tokenizer. |
langchain.tools
¶
Tools are classes that an Agent uses to interact with the world.
Each tool has a description, which the agent uses to choose the right tool for the job.
Class hierarchy:
ToolMetaclass --> BaseTool --> <name>Tool # Examples: AIPluginTool, BaseGraphQLTool
<name> # Examples: BraveSearch, HumanInputRun
Main helpers:
CallbackManagerForToolRun, AsyncCallbackManagerForToolRun
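A minimal custom tool sketch using the tool decorator (the word_count function is made up for illustration):
from langchain.tools import tool

@tool
def word_count(text: str) -> int:
    """Count the number of words in the given text."""
    return len(text.split())

print(word_count.name)                # "word_count"
print(word_count.description)         # derived from the signature and docstring
print(word_count.run("hello world"))  # 2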
Classes¶
Tool for app operations. |
|
Type of app operation as enumerator. |
|
Schema for app operations. |
|
Base class for the AINetwork tools. |
|
|
Type of operation as enumerator. |
Tool for owner operations. |
|
Schema for owner operations. |
|
Tool for owner operations. |
|
Schema for owner operations. |
|
Tool for transfer operations. |
|
Schema for transfer operations. |
|
Tool for value operations. |
|
Schema for value operations. |
|
Base Tool for Amadeus. |
|
Tool for finding the closest airport to a particular location. |
|
Schema for the AmadeusClosestAirport tool. |
|
Tool for searching for a single flight between two airports. |
|
Schema for the AmadeusFlightSearch tool. |
|
Create a new model by parsing and validating input data from keyword arguments. |
|
Tool that searches the Arxiv API. |
|
|
Tool that queries the Azure Cognitive Services Form Recognizer API. |
|
Tool that queries the Azure Cognitive Services Image Analysis API. |
|
Tool that queries the Azure Cognitive Services Speech2Text API. |
|
Tool that queries the Azure Cognitive Services Text2Speech API. |
|
Tool that queries the Azure Cognitive Services Text Analytics for Health API. |
Tool for evaluating python code in a sandbox environment. |
|
Arguments for the BearlyInterpreterTool. |
|
Information about a file to be uploaded. |
|
Tool that queries the Bing Search API and gets back json. |
|
Tool that queries the Bing search API. |
|
Tool that queries the BraveSearch. |
|
Tool that queries the Clickup API. |
|
Tool that queries the DataForSeo Google Search API and get back json. |
|
Tool that queries the DataForSeo Google search API. |
|
Create a new model by parsing and validating input data from keyword arguments. |
|
Tool that queries the DuckDuckGo search API and gets back json. |
|
Tool that queries the DuckDuckGo search API. |
|
Tool for running python code in a sandboxed environment for data analysis. |
|
Arguments for the E2BDataAnalysisTool. |
|
Description of the uploaded path with its remote path. |
|
Methods in this class recursively traverse an AST and output source code for the abstract syntax; original formatting is disregarded. |
|
Tool that queries the Eden AI Speech To Text API. |
|
Tool that queries the Eden AI Text to speech API. |
|
The base tool for all EdenAI tools. |
|
Tool that queries the Eden AI Explicit image detection. |
|
|
Tool that queries the Eden AI Object detection API. |
Tool that queries the Eden AI Identity parsing API. |
|
Tool that queries the Eden AI Invoice parsing API. |
|
Tool that queries the Eden AI Explicit text detection. |
|
Models available for Eleven Labs Text2Speech. |
|
Models available for Eleven Labs Text2Speech. |
|
Tool that queries the Eleven Labs Text2Speech API. |
|
Tool that copies a file. |
|
Input for CopyFileTool. |
|
Tool that deletes a file. |
|
Input for DeleteFileTool. |
|
Input for FileSearchTool. |
|
Tool that searches for files in a subdirectory that match a regex pattern. |
|
Input for ListDirectoryTool. |
|
Tool that lists files and directories in a specified folder. |
|
Input for MoveFileTool. |
|
Tool that moves a file. |
|
Input for ReadFileTool. |
|
Tool that reads a file. |
|
Mixin for file system tools. |
|
Error for paths outside the root directory. |
|
Input for WriteFileTool. |
|
Tool that writes a file to disk. |
|
Tool for interacting with the GitHub API. |
|
Tool for interacting with the GitLab API. |
|
Base class for Gmail tools. |
|
Input for CreateDraftTool. |
|
Tool that creates a draft email for Gmail. |
|
Tool that gets a message by ID from Gmail. |
|
Input for GetMessageTool. |
|
Input for GetMessageTool. |
|
Tool that gets a thread by ID from Gmail. |
|
Tool that searches for messages or threads in Gmail. |
|
|
Enumerator of Resources to search. |
Input for SearchGmailTool. |
|
Tool that sends a message to Gmail. |
|
Input for SendMessageTool. |
|
Tool that adds the capability to query using the Golden API and get back JSON. |
|
Tool that queries the Google Cloud Text to Speech API. |
|
Input for GooglePlacesTool. |
|
Tool that queries the Google places API. |
|
Tool that queries the Google search API. |
|
Tool that queries the Google Search API and gets back json. |
|
Tool that queries the Google search API. |
|
Tool that queries the Serper.dev Google Search API and get back json. |
|
Tool that queries the Serper.dev Google search API. |
|
Base tool for querying a GraphQL API. |
|
Tool that asks user for input. |
|
IFTTT Webhook. |
|
Tool that queries the Atlassian Jira API. |
|
Tool for getting a value in a JSON spec. |
|
Tool for listing keys in a JSON spec. |
|
Base class for JSON spec. |
|
Create a new model by parsing and validating input data from keyword arguments. |
|
|
|
Tool that queries the Metaphor Search API and gets back json. |
|
Input for UpdateSessionTool. |
|
Tool that closes an existing Multion Browser Window with provided fields. |
|
Input for CreateSessionTool. |
|
Tool that creates a new Multion Browser Window with provided fields. |
|
Tool that updates an existing Multion Browser Window with provided fields. |
|
Input for UpdateSessionTool. |
|
Input for Nuclia Understanding API. |
|
Tool to process files with the Nuclia Understanding API. |
|
Base class for the Office 365 tools. |
|
|
Input for SendMessageTool. |
Tool for creating a draft email in Office 365. |
|
Class for searching calendar events in Office 365. |
|
Input for SearchEmails Tool. |
|
Class for searching email messages in Office 365. |
|
Input for SearchEmails Tool. |
|
Tool for sending calendar events in Office 365. |
|
Input for CreateEvent Tool. |
|
Tool for sending an email in Office 365. |
|
Input for SendMessageTool. |
|
A model for a single API operation. |
|
A model for a property in the query, path, header, or cookie params. |
|
Base model for an API property. |
|
The location of the property. |
|
A model for a request body. |
|
A model for a request body property. |
|
Tool that queries the OpenWeatherMap API. |
|
Base class for browser tools. |
|
Tool for clicking on an element with the given CSS selector. |
|
Input for ClickTool. |
|
Tool for getting the URL of the current webpage. |
|
Extract all hyperlinks on the page. |
|
|
Input for ExtractHyperlinksTool. |
Tool for extracting all the text on the current webpage. |
|
Tool for getting elements in the current web page matching a CSS selector. |
|
Input for GetElementsTool. |
|
Tool for navigating a browser to a URL. |
|
Input for NavigateToolInput. |
|
Navigate back to the previous page in the browser history. |
|
AI Plugin Definition. |
|
Tool for getting the OpenAPI spec for an AI Plugin. |
|
Schema for AIPluginTool. |
|
API Configuration. |
|
Tool for getting metadata about a PowerBI Dataset. |
|
Tool for getting tables names. |
|
Tool for querying a Power BI Dataset. |
|
Tool that searches the PubMed API. |
|
Base class for requests tools. |
|
Tool for making a DELETE request to an API endpoint. |
|
Tool for making a GET request to an API endpoint. |
|
Tool for making a PATCH request to an API endpoint. |
|
Tool for making a POST request to an API endpoint. |
|
Tool for making a PUT request to an API endpoint. |
|
Create a new model by parsing and validating input data from keyword arguments. |
|
Input for SceneXplain. |
|
Tool that explains images. |
|
Tool that queries the SearchApi.io search API and returns JSON. |
|
Tool that queries the SearchApi.io search API. |
|
Tool that queries a Searx instance and gets back json. |
|
Tool that queries a Searx instance. |
|
Commands for the Bash Shell tool. |
|
Tool to run shell commands. |
|
Input for CopyFileTool. |
|
Tool that adds the capability to sleep. |
|
Base tool for interacting with Spark SQL. |
|
Tool for getting metadata about a Spark SQL. |
|
Tool for getting tables names. |
|
Use an LLM to check if a query is correct. |
|
Tool for querying a Spark SQL. |
|
Base tool for interacting with a SQL database. |
|
Tool for getting metadata about a SQL database. |
|
Tool for getting tables names. |
|
Use an LLM to check if a query is correct. |
|
Tool for querying a SQL database. |
|
Supported Image Models for generation. |
|
|
Tool used to generate images from a text-prompt. |
Tool that queries the Tavily Search API and gets back an answer. |
|
Create a new model by parsing and validating input data from keyword arguments. |
|
Tool that queries the Tavily Search API and gets back json. |
|
Base class for tools that use a VectorStore. |
|
Tool for the VectorDBQA chain. |
|
Tool for the VectorDBQAWithSources chain. |
|
Tool that searches the Wikipedia API. |
|
Tool that queries using the Wolfram Alpha SDK. |
|
Tool that searches financial news on Yahoo Finance. |
|
Tool that queries YouTube. |
|
Returns a list of all exposed (enabled) actions associated with the current user. |
|
Executes an action that is identified by action_id and must be exposed (enabled) by the current user. |
Functions¶
|
Authenticate using the AIN Blockchain. |
Authenticate using the Amadeus API. |
|
|
Detect if the file is local or remote. |
|
Download audio from url to local. |
Convert a file to base64. |
|
|
Get the first n lines of a file. |
|
Strip markdown code from a string. |
Deprecated. |
|
Add print statement to the last line if it's missing. |
|
Call f on each item in seq, calling inter() in between. |
|
|
Resolve a relative path, raising an error if not within the root directory. |
Check if path is relative to root. |
|
Build a Gmail service. |
|
Clean email body. |
|
Get credentials. |
|
Import google libraries. |
|
Import googleapiclient.discovery.build function. |
|
Import InstalledAppFlow class. |
|
Tool for asking the user for input. |
|
Authenticate using the Microsoft Graph API. |
|
Clean body of a message or event. |
|
Lazy import playwright browsers. |
|
Asynchronously get the current page of the browser. |
|
|
Create an async playwright browser. |
|
Create a playwright browser. |
Get the current page of the browser. |
|
Run an async coroutine. |
|
Convert the yaml or json serialized spec to a dict. |
|
Format tool into the OpenAI function API. |
|
Format tool into the OpenAI function API. |
|
Render the tool name and description in plain text. |
|
Render the tool name, description, and args in plain text. |
|
Create a tool to do retrieval of documents. |
|
|
Upload a block to a signed URL and return the public URL. |
langchain.tools.render
¶
Different methods for rendering Tools to be passed to LLMs.
Depending on the LLM you are using and the prompting strategy you are using, you may want Tools to be rendered in a different way. This module contains various ways to render tools.
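A self-contained sketch (word_count is a made-up tool for illustration; both render functions appear in the list below):
from langchain.tools import tool
from langchain.tools.render import render_text_description, render_text_description_and_args

@tool
def word_count(text: str) -> int:
    """Count the number of words in the given text."""
    return len(text.split())

print(render_text_description([word_count]))           # name and description
print(render_text_description_and_args([word_count]))  # plus the argument schema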
Functions¶
Format tool into the OpenAI function API. |
|
Format tool into the OpenAI function API. |
|
Render the tool name and description in plain text. |
|
Render the tool name, description, and args in plain text. |
langchain.utilities
¶
Utilities are integrations with third-party systems and packages.
Other LangChain classes use utilities to interact with these third-party systems and packages.
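For instance, the lightweight requests wrapper fetches a page body as text (a sketch; the URL is illustrative):
from langchain.utilities import TextRequestsWrapper

requests_wrapper = TextRequestsWrapper()
page = requests_wrapper.get("https://example.com")  # response body as text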
Classes¶
Wrapper for AlphaVantage API for Currency Exchange Rate. |
|
Wrapper around Apify. |
|
Arcee document. |
|
Adapter for Arcee documents |
|
Source of an Arcee document. |
|
|
Routes available for the Arcee API as enumerator. |
|
Wrapper for Arcee API. |
Filters available for a DALM retrieval and generation. |
|
|
Filter types available for a DALM retrieval as enumerator. |
Wrapper around ArxivAPI. |
|
Wrapper for AWS Lambda SDK. |
|
Wrapper around bibtexparser. |
|
Wrapper for Bing Search API. |
|
Wrapper around the Brave search engine. |
|
|
Component class for a list. |
Wrapper for Clickup API. |
|
Base class for all components. |
|
|
Component class for a member. |
|
Component class for a space. |
|
Class for a task. |
|
Component class for a team. |
Wrapper for OpenAI's DALL-E Image Generator. |
|
Wrapper around the DataForSeo API. |
|
Wrapper for DuckDuckGo Search API. |
|
Wrapper for GitHub API. |
|
Wrapper for GitLab API. |
|
Wrapper for Golden. |
|
Wrapper around Google Places API. |
|
Wrapper for Google Scholar API. |
|
Wrapper for Google Search API. |
|
Wrapper around the Serper.dev Google Search API. |
|
Wrapper around GraphQL API. |
|
Wrapper for Jira API. |
|
Interface for querying Alibaba Cloud MaxCompute tables. |
|
Wrapper for Metaphor Search API. |
|
|
Enumerator of the HTTP verbs. |
OpenAPI Model that removes mis-formatted parts of the spec. |
|
Wrapper for OpenWeatherMap API using PyOWM. |
|
Portkey configuration. |
|
Create PowerBI engine from dataset ID and credential or token. |
|
Wrapper around PubMed API. |
|
Simulates a standalone Python REPL. |
|
|
Escape punctuation within an input string. |
Wrapper around requests to handle auth and async. |
|
alias of TextRequestsWrapper |
|
Lightweight wrapper around requests library. |
|
Wrapper for SceneXplain API. |
|
Wrapper around SearchApi API. |
|
Dict like wrapper around search api results. |
|
Wrapper for Searx API. |
|
Context manager to hide prints. |
|
Wrapper around SerpAPI. |
|
|
SparkSQL is a utility class for interacting with Spark SQL. |
|
SQLAlchemy wrapper around a database. |
Wrapper for Tavily Search API. |
|
Access to the TensorFlow Datasets. |
|
Messaging Client using Twilio. |
|
Wrapper around WikipediaAPI. |
|
Wrapper for Wolfram Alpha. |
|
Wrapper for Zapier NLA. |
Functions¶
Get the number of tokens in a string of text. |
|
Get the token ids for a string of text. |
|
|
Extract elements from a dictionary. |
|
Fetch data from a URL. |
|
Fetch the first id from a dictionary. |
|
Fetch the folder id. |
|
Fetch the list id. |
|
Fetch the space id. |
|
Fetch the team id. |
|
Attempts to parse a JSON string and return the parsed object. |
Parse a dictionary by creating a component and then turning it back into a dictionary. |
|
Restore the original sensitive data from the sanitized text. |
|
Sanitize input string or dict of strings by replacing sensitive data with placeholders. |
|
Add single quotes around table names that contain spaces. |
|
|
Converts a JSON object to a markdown table. |
Check if the correct Redis modules are installed. |
|
|
Get a redis client from the connection url given. |
|
Truncate a string to a certain number of words, based on the max string length. |
|
Returns a custom user agent header. |
|
Init vertexai. |
Raise ImportError related to Vertex SDK being not available. |
langchain.utils
¶
Utility functions for LangChain.
These functions do not depend on any other LangChain module.
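For example, reading a credential from either explicit data or the environment (MY_SERVICE_API_KEY is a hypothetical variable name):
from langchain.utils import get_from_dict_or_env

# Falls back to the MY_SERVICE_API_KEY environment variable when the
# key is absent from the dictionary.
api_key = get_from_dict_or_env(
    {"my_service_api_key": "secret"},
    key="my_service_api_key",
    env_key="MY_SERVICE_API_KEY",
)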
Classes¶
Representation of a callable function to the Ernie API. |
|
Representation of a callable function to the Ernie API. |
|
Representation of a callable function to the OpenAI API. |
|
Representation of a callable function to the OpenAI API. |
Functions¶
|
Get a value from a dictionary or an environment variable. |
|
Get a value from a dictionary or an environment variable. |
|
Converts a Pydantic model to a function description for the Ernie API. |
Converts a Pydantic model to a function description for the Ernie API. |
|
|
Extract all links from a raw html string and convert into absolute paths. |
|
Extract all links from a raw html string. |
|
Try to substitute $refs in JSON Schema. |
Row-wise cosine similarity between two equal-width matrices. |
|
|
Row-wise cosine similarity with optional top-k and score threshold filtering. |
|
Converts a Pydantic model to a function description for the OpenAI API. |
Converts a Pydantic model to a function description for the OpenAI API. |
|
|
Convert a list to a comma-separated string. |
Stringify a dictionary. |
|
Stringify a value. |
langchain.vectorstores
¶
Vector store stores embedded data and performs vector search.
One of the most common ways to store and search over unstructured data is to embed it and store the resulting embedding vectors, and then query the store and retrieve the data that are ‘most similar’ to the embedded query.
Class hierarchy:
VectorStore --> <name> # Examples: Annoy, FAISS, Milvus
BaseRetriever --> VectorStoreRetriever --> <name>Retriever # Example: VespaRetriever
Main helpers:
Embeddings, Document
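A minimal FAISS sketch (assumes the faiss-cpu package is installed and an OpenAI API key is configured; any embeddings class can be substituted):
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS

vectorstore = FAISS.from_texts(
    ["LangChain supports many vector stores.", "FAISS performs similarity search."],
    OpenAIEmbeddings(),
)
docs = vectorstore.similarity_search("Which store performs similarity search?", k=1)
retriever = vectorstore.as_retriever()  # expose the store as a BaseRetriever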
Classes¶
|
Alibaba Cloud OpenSearch vector store. |
|
Alibaba Cloud OpenSearch client configuration. |
|
AnalyticDB (distributed PostgreSQL) vector store. |
|
Annoy vector store. |
|
Wrapper around DataStax Astra DB for vector-store workloads. |
|
Atlas vector store. |
|
AwaDB vector store. |
Azure Cosmos DB for MongoDB vCore vector store. |
|
Cosmos DB Similarity Type as enumerator. |
|
|
Azure Cognitive Search vector store. |
Retriever that uses Azure Cognitive Search. |
|
|
|
Baidu Elasticsearch vector store. |
|
|
Wrapper around Apache Cassandra(R) for vector-store workloads. |
|
ChromaDB vector store. |
|
Clarifai AI vector store. |
|
ClickHouse VectorSearch vector store. |
ClickHouse client configuration. |
|
DashVector vector store. |
|
Activeloop Deep Lake vector store. |
|
|
Dingo vector store. |
Base class for DocArray based vector stores. |
|
HnswLib storage using DocArray package. |
|
In-memory DocArray storage for exact search. |
|
[Deprecated] Elasticsearch with k-nearest neighbor search (k-NN) vector store. |
|
ElasticVectorSearch uses the brute force method of searching on vectors. |
|
Approximate retrieval strategy using the HNSW algorithm. |
|
Base class for Elasticsearch retrieval strategies. |
|
Elasticsearch vector store. |
|
Exact retrieval strategy using the script_score query. |
|
Sparse retrieval strategy using the text_expansion processor. |
|
|
Wrapper around Epsilla vector database. |
|
Meta Faiss vector store. |
|
Hippo vector store. |
|
Hologres API vector store. |
Hologres API wrapper. |
|
|
LanceDB vector store. |
Implementation of Vector Store using LLMRails. |
|
Retriever for LLMRails. |
|
|
Marqo vector store. |
Google Vertex AI Matching Engine vector store. |
|
|
Meilisearch vector store. |
|
Milvus vector store. |
Momento Vector Index (MVI) vector store. |
|
MongoDB Atlas Vector Search vector store. |
|
|
MyScale vector store. |
MyScale client configuration. |
|
MyScale vector store without a metadata column. |
|
|
Neo4j vector index. |
Enumerator of the Distance strategies. |
|
|
NucliaDB vector store. |
|
Amazon OpenSearch Vector Engine vector store. |
|
Base model for all SQL stores. |
Collection store. |
|
|
Embedding store. |
|
Postgres with the pg_embedding extension as a vector store. |
Result from a query. |
|
|
|
|
Base model for the SQL stores. |
Enumerator of the Distance strategies. |
|
|
Postgres/PGVector vector store. |
|
Pinecone vector store. |
|
Qdrant vector store. |
Qdrant related exceptions. |
|
|
Redis vector database. |
Retriever for Redis VectorStore. |
|
Collection of RedisFilterFields. |
|
A logical expression of RedisFilterFields. |
|
Base class for RedisFilterFields. |
|
RedisFilterOperator enumerator is used to create RedisFilterExpressions. |
|
A RedisFilterField representing a numeric field in a Redis index. |
|
A RedisFilterField representing a tag in a Redis index. |
|
A RedisFilterField representing a text field in a Redis index. |
|
Schema for flat vector fields in Redis. |
|
Schema for HNSW vector fields in Redis. |
|
Schema for numeric fields in Redis. |
|
Distance metrics for Redis vector fields. |
|
Base class for Redis fields. |
|
Schema for Redis index. |
|
Base class for Redis vector fields. |
|
Schema for tag fields in Redis. |
|
Schema for text fields in Redis. |
|
|
Rockset vector store. |
|
ScaNN vector store. |
|
SemaDB vector store. |
SingleStore DB vector store. |
|
|
Base class for serializing data. |
|
Serializes data in binary json using the bson python package. |
|
Serializes data in json using the json package from python standard library. |
Serializes data in Apache Parquet format using the pyarrow package. |
|
Simple in-memory vector store based on the scikit-learn library NearestNeighbors implementation. |
|
Exception raised by SKLearnVectorStore. |
|
|
Wrapper around SQLite with vss extension as a vector database. |
|
StarRocks vector store. |
StarRocks client configuration. |
|
Supabase Postgres vector store. |
|
|
Tair vector store. |
Tencent vector DB Connection params. |
|
Tencent vector DB Index params. |
|
Wrapper around the Tencent vector database. |
|
|
Tigris vector store. |
|
Wrapper around TileDB vector database. |
VectorStore implementation using the timescale vector client to store vectors in Postgres. |
|
|
Typesense vector store. |
|
USearch vector store. |
|
Enumerator of the Distance strategies for calculating distances between vectors. |
|
Wrapper around Vald vector database. |
|
Vearch vector store (flag 1 for cluster, 0 for standalone). |
|
Vectara API vector store. |
Retriever class for Vectara. |
|
|
Vespa vector store. |
|
Weaviate vector store. |
|
Xata vector store. |
|
Configuration for a Zep Collection. |
|
Zep vector store. |
|
Zilliz vector store. |
Functions¶
|
Create metadata from fields. |
Import annoy if available, otherwise raise error. |
|
|
Check if a string contains multiple substrings. |
Import faiss if available, otherwise raise error. |
|
|
Check if a string contains multiple substrings. |
Check if the values are not None or an empty string. |
|
Sort the first element to match the index_name, if it exists. |
|
Decorator to call the synchronous method of the class if the async method is not implemented. |
|
Check if Redis index exists. |
|
Decorator to check for misuse of equality operators. |
|
Reads in the index schema from a dict or yaml file. |
|
Import scann if available, otherwise raise error. |
|
Normalize vectors to unit length. |
|
Print a debug message if DEBUG is True. |
|
Get a named result from a query. |
|
|
Check if a string has multiple substrings. |
Import tiledb-vector-search if available, otherwise raise error. |
|
|
|
Import usearch if available, otherwise raise error. |
|
Filter out metadata types that are not supported for a vector store. |
|
Calculate maximal marginal relevance. |