langchain_experimental 0.2.0rc1
langchain_experimental.agents
An agent is a class that uses an LLM to choose a sequence of actions to take.
In Chains, a sequence of actions is hardcoded. In Agents, a language model is used as a reasoning engine to determine which actions to take and in which order.
Agents select and use Tools and Toolkits for actions.
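In plain Python, the loop looks roughly like this (a stub function stands in for the LLM; the tool and all names are illustrative, not the langchain_experimental API):

```python
# Minimal sketch of the agent idea: a "reasoning engine" picks the next
# action from a set of tools until it decides to finish.

def multiply(a: int, b: int) -> int:
    """A tool the agent can call."""
    return a * b

def stub_llm(question: str, history: list) -> dict:
    """Stands in for an LLM: chooses a tool first, then finishes."""
    if not history:
        return {"action": "multiply", "args": (6, 7)}
    return {"action": "finish", "answer": history[-1]}

def run_agent(question: str, tools: dict) -> int:
    history = []
    while True:
        step = stub_llm(question, history)   # the LLM decides what to do next
        if step["action"] == "finish":
            return step["answer"]
        history.append(tools[step["action"]](*step["args"]))

answer = run_agent("What is 6 * 7?", {"multiply": multiply})
```

A chain would hardcode the multiply-then-answer sequence; the agent leaves that decision to the model on every iteration.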
Functions
- Create a pandas DataFrame agent by loading a CSV into a DataFrame.
- Construct a pandas agent from an LLM and DataFrame(s).
- Construct a Python agent from an LLM and tool.
- Construct a Spark agent from an LLM and DataFrame.
- Construct an Xorbits agent from an LLM and DataFrame.
langchain_experimental.autonomous_agents
Autonomous agents in the LangChain experimental package include [AutoGPT](https://github.com/Significant-Gravitas/AutoGPT), [BabyAGI](https://github.com/yoheinakajima/babyagi), and [HuggingGPT](https://arxiv.org/abs/2303.17580) agents that interact with language models autonomously.
These agents have specific functionalities like memory management, task creation, execution chains, and response generation.
They differ from ordinary agents in their autonomous decision-making capabilities, memory handling, and specialized functionalities for task management and response generation.
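The BabyAGI-style control flow (task execution, task creation, and prioritization) can be sketched with stubs in place of the LLM-backed chains; this shows only the loop structure, not the actual BabyAGI implementation:

```python
from collections import deque

def execute(task: str) -> str:
    """Stub for the task-execution chain."""
    return f"result of {task}"

def create_tasks(result: str) -> list:
    """Stub for the task-creation chain: spawn follow-ups."""
    return [f"follow-up to {result}"] if "objective" in result else []

def prioritize(tasks: deque) -> deque:
    """Stub for the prioritization chain: here, sort alphabetically."""
    return deque(sorted(tasks))

def babyagi_loop(objective: str, max_iterations: int = 3) -> list:
    tasks, results = deque([objective]), []
    for _ in range(max_iterations):
        if not tasks:
            break
        result = execute(tasks.popleft())   # 1. execute the next task
        results.append(result)
        tasks.extend(create_tasks(result))  # 2. create new tasks from the result
        tasks = prioritize(tasks)           # 3. reprioritize the queue
    return results

results = babyagi_loop("objective: plan a trip")
```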
Classes
- Agent for interacting with AutoGPT.
- Memory for AutoGPT.
- Action returned by AutoGPTOutputParser.
- Output parser for AutoGPT.
- Base output parser for AutoGPT.
- Prompt for AutoGPT.
- Generator of custom prompt strings.
- Controller model for the BabyAGI agent.
- Chain generating tasks.
- Chain to execute tasks.
- Chain to prioritize tasks.
- Agent for interacting with HuggingGPT.
- Chain to execute tasks.
- Generates a response based on the input.
- Task to be executed.
- Load tools and execute tasks.
- Base class for a planner.
- A plan to execute.
- Parses the output of the planning stage.
- A step in the plan.
- Chain to execute tasks.
- Planner for tasks.
Functions
- Preprocesses a string to be parsed as JSON.
- Generates a prompt string.
- Load the ResponseGenerator.
- Load the chat planner.
langchain_experimental.chat_models
Chat Models are a variation on language models.
While Chat Models use language models under the hood, the interface they expose is a bit different. Rather than expose a “text in, text out” API, they expose an interface where “chat messages” are the inputs and outputs.
Class hierarchy:
BaseLanguageModel --> BaseChatModel --> <name> # Examples: ChatOpenAI, ChatGooglePalm
Main helpers:
AIMessage, BaseMessage, HumanMessage
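A minimal sketch of the messages-in, message-out interface, with plain dataclasses standing in for LangChain's message classes and a stub in place of a real model:

```python
from dataclasses import dataclass

@dataclass
class BaseMessage:
    content: str

class HumanMessage(BaseMessage):
    pass

class AIMessage(BaseMessage):
    pass

class EchoChatModel:
    """Stub chat model: consumes a message list, returns one AIMessage."""
    def invoke(self, messages: list) -> AIMessage:
        last = messages[-1].content
        return AIMessage(content=f"You said: {last}")

reply = EchoChatModel().invoke([HumanMessage(content="hello")])
```

The contrast with a plain LLM is the type signature: a list of role-tagged messages goes in, and a single AI message comes out, rather than "text in, text out".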
Classes
- Wrapper for chat LLMs.
- Wrapper for Llama-2-chat model.
- See https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1#instruction-format
- Wrapper for Orca-style models.
- Wrapper for Vicuna-style models.
langchain_experimental.comprehend_moderation
Comprehend Moderation is used to detect and handle Personally Identifiable Information (PII), toxicity, and prompt safety in text.
The LangChain experimental package includes the AmazonComprehendModerationChain class for comprehend moderation tasks. It is based on the Amazon Comprehend service. This class can be configured with specific moderation settings like PII labels, redaction, toxicity thresholds, and prompt safety thresholds.
See more at https://aws.amazon.com/comprehend/
The Amazon Comprehend service is used by several other classes:
- ComprehendToxicity is used to check the toxicity of text prompts using the AWS Comprehend service and take actions based on the configuration.
- ComprehendPromptSafety is used to validate the safety of given prompt text, raising an error if unsafe content is detected based on the specified threshold.
- ComprehendPII is designed to handle Personally Identifiable Information (PII) moderation tasks, detecting and managing PII entities in text inputs.
langchain_experimental.cpal
Causal program-aided language (CPAL) is a concept implemented in LangChain as a chain for causal modeling and narrative decomposition.
CPAL improves upon the program-aided language (PAL) by incorporating causal structure to prevent hallucination in language models, particularly when dealing with complex narratives and math problems with nested dependencies.
CPAL involves translating causal narratives into a stack of operations, setting hypothetical conditions for causal models, and decomposing narratives into story elements.
It allows for the creation of causal chains that define the relationships between different elements in a narrative, enabling the modeling and analysis of causal relationships within a given context.
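A toy illustration of the idea: here the narrative is already decomposed into a stack of causal operations, and a hypothetical intervention overrides one variable before the graph is evaluated. The real chain derives this structure from text with an LLM; everything below is illustrative:

```python
# "Pat has 2 pets; Jan has 3x Pat's pets" as a stack of causal operations,
# each computing one variable from the variables resolved before it.
operations = [
    ("pat", lambda vals: 2),
    ("jan", lambda vals: 3 * vals["pat"]),
]

def run(ops, interventions=None):
    """Evaluate the causal graph in order, letting interventions override."""
    vals = {}
    for name, fn in ops:
        vals[name] = (interventions or {}).get(name, fn(vals))
    return vals

baseline = run(operations)                   # pat=2, so jan=6
hypothetical = run(operations, {"pat": 10})  # intervene: pat=10, so jan=30
```

Because downstream values are recomputed from the causal structure rather than free-associated, the hypothetical answer stays consistent with the narrative's dependencies.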
Classes
- Causal program-aided language (CPAL) chain implementation.
- Translate the causal narrative into a stack of operations.
- Set the hypothetical conditions for the causal model.
- Decompose the narrative into its story elements.
- Query the outcome table using SQL.
- Enum for constants used in the CPAL.
- Causal data.
- Entity in the story.
- Entity initial conditions.
- Intervention data of the story, aka initial conditions.
- Narrative input as three story elements.
- Query data of the story.
- Result of the story query.
- Story data.
- System initial conditions.
langchain_experimental.data_anonymizer
Data anonymizer contains both Anonymizers and Deanonymizers. It uses the [Microsoft Presidio](https://microsoft.github.io/presidio/) library.
Anonymizers are used to replace a Personally Identifiable Information (PII) entity text with some other value by applying a certain operator (e.g. replace, mask, redact, encrypt).
Deanonymizers are used to revert the anonymization operation (e.g. to decrypt an encrypted text).
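A minimal sketch of the anonymize/deanonymize round trip, with a single regex standing in for Presidio's recognizers (the placeholder format and function names are illustrative):

```python
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def anonymize(text: str, mapping: dict) -> str:
    """Replace each email with a placeholder, remembering the original."""
    def repl(match):
        placeholder = f"<EMAIL_{len(mapping)}>"
        mapping[placeholder] = match.group(0)   # saved for later reversal
        return placeholder
    return EMAIL.sub(repl, text)

def deanonymize(text: str, mapping: dict) -> str:
    """Revert the anonymization using the stored mapping."""
    for placeholder, original in mapping.items():
        text = text.replace(placeholder, original)
    return text

mapping = {}
masked = anonymize("Contact jane@example.com please", mapping)
restored = deanonymize(masked, mapping)
```

The mapping dict plays the role of the deanonymizer mapping: only a holder of that mapping (or the decryption key, for the encrypt operator) can reverse the operation.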
Classes
- Base abstract class for anonymizers.
- Base abstract class for reversible anonymizers.
- Deanonymizer mapping.
- Anonymizer using Microsoft Presidio.
- Base Anonymizer using Microsoft Presidio.
- Reversible Anonymizer using Microsoft Presidio.
langchain_experimental.fallacy_removal
The Fallacy Removal Chain runs a self-review for logical fallacies, as determined by the paper [Robust and Explainable Identification of Logical Fallacies in Natural Language Arguments](https://arxiv.org/pdf/2212.07425.pdf). It is modeled after Constitutional AI and uses the same format, but applies logical fallacies as generalized rules in order to remove them from the output.
Classes
- Chain for applying logical fallacy evaluations.
- Logical fallacy.
langchain_experimental.generative_agents
Generative Agent primitives.
Classes
- Agent as a character with memory and innate characteristics.
- Memory for the generative agent.
langchain_experimental.graph_transformers
Graph Transformers transform Documents into Graph Documents.
Classes
- Transform documents into graph documents using the Diffbot NLP API.
- List of nodes with associated properties.
- Simplified schema mapping.
- An enumeration.
- Transform documents into graph-based documents using an LLM.
- Create a new model by parsing and validating input data from keyword arguments.
Functions
- Formats a string to be used as a property key.
- Simple model that allows limiting node and/or relationship types.
- Map the SimpleNode to the base Node.
- Map the SimpleRelationship to the base Relationship.
- Utility function to conditionally create a field with an enum constraint.
langchain_experimental.llm_bash
LLM bash is a chain that uses an LLM to interpret a prompt and execute bash code.
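The execution half of that chain can be sketched as a subprocess call; the command is hardcoded here where the real chain would use LLM output, and a POSIX system with bash on the PATH is assumed:

```python
import subprocess

def run_bash(command: str) -> str:
    """Run a (hypothetically LLM-generated) bash command, return stdout."""
    completed = subprocess.run(
        ["bash", "-c", command],
        capture_output=True, text=True, check=True,
    )
    return completed.stdout

output = run_bash("echo hello")
```

Executing model-generated shell commands is inherently risky; in practice the command should be reviewed or sandboxed before running.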
Classes
- Chain that interprets a prompt and executes bash operations.
- Wrapper for starting subprocesses.
- Parser for bash output.
langchain_experimental.llm_symbolic_math
Chain that interprets a prompt and executes Python code to do math.
Heavily borrowed from llm_math; uses the [SymPy](https://www.sympy.org/) package.
Classes
- Chain that interprets a prompt and executes Python code to do symbolic math.
langchain_experimental.llms
Experimental LLM classes provide access to large language model (LLM) APIs and services.
Classes
- [Deprecated] Chat model for interacting with Anthropic functions.
- Parser for the tool tags.
- Jsonformer-wrapped LLM using the HuggingFace Pipeline API.
- Chat model using the Llama API.
- LMFormatEnforcer-wrapped LLM using the HuggingFace Pipeline API.
- Function chat model that uses the Ollama API.
- RELLM-wrapped LLM using the HuggingFace Pipeline API.
Functions
- Lazily import the jsonformer package.
- Lazily import the lmformatenforcer package.
- Convert a tool to an Ollama tool.
- Extract function_call from AIMessage.
- Lazily import the rellm package.
langchain_experimental.open_clip
OpenCLIP Embeddings model.
OpenCLIP is a multimodal model that can encode text and images into a shared space.
See the paper https://arxiv.org/abs/2103.00020 and [this repository](https://github.com/mlfoundations/open_clip) for details.
Classes
- OpenCLIP Embeddings model.
langchain_experimental.pal_chain
PAL Chain implements Program-Aided Language Models.
See the paper: https://arxiv.org/pdf/2211.10435.pdf.
This chain is vulnerable to [arbitrary code execution](https://github.com/langchain-ai/langchain/issues/5872).
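A sketch of the PAL pattern and of why validation matters: the (here hardcoded) generated program is checked for imports and dunder access before a restricted exec, in the spirit of mitigating the vulnerability above. This is illustrative, not the actual PALChain validator:

```python
import ast

# Stands in for LLM output: a small program whose execution is the answer
# to "I had 23 apples and gave 2 people 7 each; how many are left?"
GENERATED = """
apples = 23
given_away = 2 * 7
solution = apples - given_away
"""

def validate(code: str) -> None:
    """Reject obviously dangerous constructs before execution."""
    for node in ast.walk(ast.parse(code)):
        if isinstance(node, (ast.Import, ast.ImportFrom)):
            raise ValueError("imports are not allowed")
        if isinstance(node, ast.Attribute) and node.attr.startswith("__"):
            raise ValueError("dunder access is not allowed")

def run_pal(code: str) -> int:
    validate(code)
    namespace: dict = {}
    exec(code, {"__builtins__": {}}, namespace)  # heavily restricted exec
    return namespace["solution"]

answer = run_pal(GENERATED)
```

AST-level checks like these raise the bar but are not a complete sandbox; untrusted code should still run in an isolated environment.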
Classes
- Chain that implements Program-Aided Language Models (PAL).
- Validation for PAL-generated code.
langchain_experimental.plan_and_execute
Plan-and-execute agents plan tasks with a language model (LLM) and execute them with a separate agent.
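The division of labor can be sketched with stubs in place of the LLM-backed planner and executor:

```python
def stub_planner(objective: str) -> list:
    """Stands in for the planning LLM: produce all steps up front."""
    return [f"research {objective}", f"summarize {objective}"]

def stub_executor(step: str) -> str:
    """Stands in for the executing agent: carry out one step."""
    return f"done: {step}"

def plan_and_execute(objective: str) -> list:
    plan = stub_planner(objective)                  # plan once...
    return [stub_executor(step) for step in plan]   # ...then execute each step

responses = plan_and_execute("quantum computing")
```

Separating the two roles lets the planner reason about the whole objective while the executor focuses on one concrete step at a time.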
Classes
- Plan and execute a chain of steps.
- Base executor.
- Chain executor.
- Base planner.
- LLM planner.
- Planning output parser.
- Base step container.
- Container for a list of steps.
- Plan.
- Plan output parser.
- Step.
- Step response.
Functions
- Load an agent executor.
- Load a chat planner.
langchain_experimental.prompt_injection_identifier
HuggingFace Injection Identifier is a tool that uses the [HuggingFace Prompt Injection model](https://huggingface.co/deepset/deberta-v3-base-injection) to detect prompt injection attacks.
Classes
- Tool that uses the HuggingFace Prompt Injection model to detect prompt injection attacks.
- Exception raised when a prompt injection attack is detected.
langchain_experimental.recommenders
Amazon Personalize primitives.
[Amazon Personalize](https://docs.aws.amazon.com/personalize/latest/dg/what-is-personalize.html) is a fully managed machine learning service that uses your data to generate item recommendations for your users.
Classes
- Amazon Personalize Runtime wrapper for executing real-time operations.
- Chain for retrieving recommendations from Amazon Personalize.
langchain_experimental.retrievers
A Retriever class returns Documents given a text query.
It is more general than a vector store: a retriever does not need to be able to store documents, only to return (or retrieve) them.
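A minimal retriever under that definition needs only a query-to-documents method; here is a toy keyword-overlap retriever with no storage backend beyond a list (names are illustrative, not the LangChain base class):

```python
class KeywordRetriever:
    """Return documents sharing at least one word with the query."""

    def __init__(self, documents: list):
        self.documents = documents

    def get_relevant_documents(self, query: str) -> list:
        terms = set(query.lower().split())
        return [d for d in self.documents
                if terms & set(d.lower().split())]

retriever = KeywordRetriever(["the cat sat", "dogs bark loudly"])
hits = retriever.get_relevant_documents("cat")
```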
Classes
- Retriever that uses Vector SQL Database.
langchain_experimental.rl_chain
RL (Reinforcement Learning) Chain leverages the Vowpal Wabbit (VW) models for reinforcement learning with a context, with the goal of modifying the prompt before the LLM call.
[Vowpal Wabbit](https://vowpalwabbit.org/) provides fast, efficient, and flexible online machine learning techniques for reinforcement learning, supervised learning, and more.
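The selection-and-feedback loop can be sketched with a lookup-table policy standing in for the VW model: given a context ("based on") and candidates ("to select from"), the policy picks one value to splice into the prompt, and a score of the response is fed back to improve future picks. All names here are illustrative:

```python
class TablePolicy:
    """Toy stand-in for the VW policy: remembers per-(context, candidate) scores."""

    def __init__(self):
        self.scores = {}   # (context, candidate) -> last observed score

    def predict(self, context: str, candidates: list) -> str:
        """Pick the candidate with the best known score for this context."""
        return max(candidates,
                   key=lambda c: self.scores.get((context, c), 0.0))

    def learn(self, context: str, candidate: str, score: float) -> None:
        """Feedback step: record how well the chosen candidate did."""
        self.scores[(context, candidate)] = score

policy = TablePolicy()
policy.learn("user likes spicy", "mild salsa", 0.1)
policy.learn("user likes spicy", "hot salsa", 0.9)
pick = policy.predict("user likes spicy", ["mild salsa", "hot salsa"])
```

VW replaces the lookup table with a contextual-bandit model that generalizes across embedded contexts and actions, but the predict/learn cycle is the same shape.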
Classes
- Auto selection scorer.
- Abstract class to represent an embedder.
- Abstract class to represent an event.
- Abstract class to represent a policy.
- Chain that leverages the Vowpal Wabbit (VW) model as a learned policy for reinforcement learning.
- Abstract class to represent the selected item.
- Abstract class to grade the chosen selection or the response of the LLM.
- Vowpal Wabbit policy.
- Metrics Tracker Average.
- Metrics Tracker Rolling Window.
- Model Repository.
- Chain that leverages the Vowpal Wabbit (VW) model for reinforcement learning with a context, with the goal of modifying the prompt before the LLM call.
- Event class for the PickBest chain.
- Embed the BasedOn and ToSelectFrom inputs into a format that can be used by the learning policy.
- Random policy for the PickBest chain.
- Selected class for the PickBest chain.
- Vowpal Wabbit custom logger.
Functions
- Wrap a value to indicate that it should be based on.
- Wrap a value to indicate that it should be embedded.
- Wrap a value to indicate that it should be embedded and kept.
- Wrap a value to indicate that it should be selected from.
- Embed the actions or context using the SentenceTransformer model (or a model that has an encode function).
- Embed a dictionary item.
- Embed a list item.
- Embed a string or an _Embed object.
- Get the BasedOn and ToSelectFrom from the inputs.
- Check if an item is a string.
- Parse the input string into a list of examples.
- Prepare the inputs for auto embedding.
- Convert an embedding to a string.
langchain_experimental.smart_llm
The SmartGPT chain applies self-critique using the SmartGPT workflow.
See details at https://youtu.be/wVzuvf9D9BU
The workflow performs these three steps:
1. Ideate: pass the user prompt to an ideation LLM n_ideas times; each result is an "idea".
2. Critique: pass the ideas to a critique LLM, which looks for flaws in the ideas and picks the best one.
3. Resolve: pass the critique to a resolver LLM, which improves upon the best idea and outputs only the (improved version of) the best output.
In total, the SmartGPT workflow uses n_ideas + 2 LLM calls.
Note that SmartLLMChain only improves results (compared to a basic LLMChain) when the underlying models have the capability for reflection, which smaller models often don't.
Finally, a SmartLLMChain assumes that each underlying LLM outputs exactly one result.
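The three steps and the n_ideas + 2 call budget can be checked with a counter and a stub in place of each LLM call (illustrative only, not the SmartLLMChain implementation):

```python
calls = {"n": 0}

def llm(prompt: str) -> str:
    """Stands in for one LLM call; counts invocations."""
    calls["n"] += 1
    return f"response to: {prompt[:20]}"

def smart_llm(user_prompt: str, n_ideas: int = 3) -> str:
    ideas = [llm(user_prompt) for _ in range(n_ideas)]    # 1. Ideate (n_ideas calls)
    critique = llm("find flaws in: " + "; ".join(ideas))  # 2. Critique (1 call)
    return llm("improve best idea given: " + critique)    # 3. Resolve (1 call)

result = smart_llm("Write a haiku about rain", n_ideas=3)
total_calls = calls["n"]   # n_ideas + 2 = 5
```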
Classes
- Chain for applying self-critique using the SmartGPT workflow.
langchain_experimental.sql
SQL Chain interacts with a SQL database.
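The chain's core flow, running SQL against a database and returning the rows, can be sketched with sqlite3; the query is hardcoded here where the real chain would generate it from a natural-language question:

```python
import sqlite3

# Build a tiny in-memory database to query against.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (name TEXT, salary INTEGER)")
conn.executemany("INSERT INTO employees VALUES (?, ?)",
                 [("Ann", 50000), ("Bob", 60000)])

# Stands in for LLM output for "Who earns more than 55000?"
generated_sql = "SELECT name FROM employees WHERE salary > 55000"
rows = conn.execute(generated_sql).fetchall()
conn.close()
```

As with bash and PAL above, executing model-generated SQL warrants read-only credentials or query validation in any real deployment.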
Classes
- Chain for interacting with a SQL database.
- Chain for querying a SQL database that is a sequential chain.
- Chain for interacting with a Vector SQL database.
- Output parser for Vector SQL.
- Parser based on VectorSQLOutputParser.
Functions
- Get a result from the SQL database.
langchain_experimental.tabular_synthetic_data
Generate tabular synthetic data using an LLM and a few-shot template.
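A sketch of the few-shot template side: real rows are rendered as examples in a prompt that asks the (omitted here) LLM to continue the pattern. The field names and prompt wording are illustrative:

```python
examples = [
    {"name": "Ann", "age": 34, "city": "Oslo"},
    {"name": "Bob", "age": 28, "city": "Lima"},
]

def build_prompt(rows: list) -> str:
    """Render seed rows as few-shot examples for a data-generation prompt."""
    rendered = "\n".join(
        f"name: {r['name']}, age: {r['age']}, city: {r['city']}" for r in rows
    )
    return ("Generate more rows matching these examples:\n"
            + rendered + "\nNew row:")

prompt = build_prompt(examples)
```

The LLM's completion would then be parsed back into rows, giving synthetic records that follow the schema and style of the seed data.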
Classes
- Generate synthetic data using the given LLM and few-shot template.
Functions
- Create an instance of SyntheticDataGenerator tailored for OpenAI models.
langchain_experimental.text_splitter
Experimental text splitter based on semantic similarity.
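The underlying mechanic, embedding neighbouring sentences and splitting where their cosine distance spikes, can be sketched with a toy bag-of-words embedding (the real splitter uses a proper embedding model and breakpoint thresholds):

```python
import math

def embed(sentence: str) -> dict:
    """Toy embedding: word-count vector."""
    vec = {}
    for word in sentence.lower().split():
        vec[word] = vec.get(word, 0) + 1
    return vec

def cosine_distance(a: dict, b: dict) -> float:
    dot = sum(a[w] * b.get(w, 0) for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return 1 - dot / norm

def semantic_split(sentences: list, threshold: float = 0.8) -> list:
    chunks, current = [], [sentences[0]]
    for prev, cur in zip(sentences, sentences[1:]):
        if cosine_distance(embed(prev), embed(cur)) > threshold:
            chunks.append(current)      # semantic break: start a new chunk
            current = []
        current.append(cur)
    chunks.append(current)
    return chunks

chunks = semantic_split(["cats purr softly", "cats purr often", "stocks fell today"])
```

The two cat sentences stay together while the unrelated third sentence starts a new chunk, which is the behavior a fixed-size splitter cannot guarantee.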
Classes
- Split the text based on semantic similarity.
Functions
- Calculate cosine distances between sentences.
- Combine sentences based on buffer size.
langchain_experimental.tools
Experimental Python REPL tools.
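One chore such tools handle is sanitizing model output before execution: LLMs often wrap code in markdown fences or prefix it with "python". An illustrative version (not the actual sanitize_input implementation):

```python
def sanitize(query: str) -> str:
    """Strip markdown fences and a leading 'python' tag from generated code."""
    query = query.strip()
    if query.startswith("```"):
        # Remove an opening fence, with or without the language tag.
        query = query.removeprefix("```python").removeprefix("```")
    query = query.removesuffix("```")       # remove a closing fence
    return query.strip()

clean = sanitize("```python\nprint(1 + 1)\n```")
```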
Classes
- Tool for running Python code in a REPL.
- Python inputs.
- Tool for running Python code in a REPL.
Functions
- Sanitize input to the Python REPL.
langchain_experimental.tot
Implementation of a Tree of Thought (ToT) chain based on the paper [Large Language Model Guided Tree-of-Thought](https://arxiv.org/pdf/2305.08291.pdf).
The Tree of Thought (ToT) chain uses a tree structure to explore the space of possible solutions to a problem.
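The tree search itself can be sketched with stubs for the LLM-backed thought generator and checker; invalid branches are pruned rather than pursued. The toy problem here is enumerating binary strings that start with "1":

```python
from collections import deque

def propose(thought: str) -> list:
    """Stands in for the thought generation strategy: two continuations."""
    return [thought + "0", thought + "1"]

def check(thought: str) -> bool:
    """Stands in for the ToT checker: prune thoughts not starting with '1'."""
    return thought.startswith("1")

def tot_search(root: str, target_len: int) -> list:
    frontier, solutions = deque([root]), []
    while frontier:
        thought = frontier.popleft()
        if not check(thought):
            continue                     # invalid branch: abandon it
        if len(thought) == target_len:
            solutions.append(thought)    # complete, valid thought
            continue
        frontier.extend(propose(thought))
    return solutions

solutions = tot_search("1", target_len=3)
```

The checker is what distinguishes ToT from plain sampling: partial thoughts are evaluated mid-search, so whole subtrees of invalid reasoning are never expanded.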
Classes
- Chain implementing the Tree of Thought (ToT).
- Tree of Thought (ToT) checker.
- Tree of Thought (ToT) controller.
- Memory for the Tree of Thought (ToT) chain.
- Parse and check the output of the language model.
- Parse the output of a PROPOSE_PROMPT response.
- A thought in the ToT.
- Enum for the validity of a thought.
- Base class for a thought generation strategy.
- Strategy that sequentially uses a "propose prompt".
- Sample strategy from a Chain-of-Thought (CoT) prompt.
Functions
- Get the prompt for the Chain of Thought (CoT) chain.
- Get the prompt for the PROPOSE_PROMPT chain.
langchain_experimental.utilities
Utility that simulates a standalone Python REPL.
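Simulating a REPL amounts to exec-ing a code string in an isolated namespace and capturing whatever it prints; a minimal sketch:

```python
import contextlib
import io

def python_repl(code: str) -> str:
    """Run a code string in its own namespace, return captured stdout."""
    namespace: dict = {}
    buffer = io.StringIO()
    with contextlib.redirect_stdout(buffer):
        exec(code, namespace)            # isolated namespace per call
    return buffer.getvalue()

output = python_repl("print(sum(range(5)))")
```

Capturing stdout (rather than a return value) matches how REPL tools report results back to the calling agent, which is why generated code must print what it wants seen.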
Classes
- Simulates a standalone Python REPL.