langchain.agents.chat.base.ChatAgent

class langchain.agents.chat.base.ChatAgent[source]

Bases: Agent

Deprecated since version 0.1.0: Use create_react_agent instead.

Chat Agent.

Create a new model by parsing and validating input data from keyword arguments.

Raises ValidationError if the input data cannot be parsed to form a valid model.
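
A minimal migration sketch for the deprecation note above, assuming the langchain-openai package, an OpenAI API key, and the community ReAct prompt from the LangChain hub (any ReAct-style prompt with the required variables works):

.. code-block:: python

    from langchain import hub
    from langchain.agents import AgentExecutor, create_react_agent
    from langchain_core.tools import Tool
    from langchain_openai import ChatOpenAI

    llm = ChatOpenAI(temperature=0)
    tools = [Tool(name="echo", func=lambda s: s, description="Echo the input back.")]
    prompt = hub.pull("hwchase17/react")  # assumed hub prompt; substitute your own if preferred

    agent = create_react_agent(llm, tools, prompt)
    executor = AgentExecutor(agent=agent, tools=tools)
    # executor.invoke({"input": "Echo the word 'hello'."})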

param allowed_tools: Optional[List[str]] = None

Allowed tools for the agent. If None, all tools are allowed.

param llm_chain: LLMChain [Required]

LLMChain to use for the agent.

param output_parser: AgentOutputParser [Optional]

Output parser for the agent.

async aplan(intermediate_steps: List[Tuple[AgentAction, str]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) Union[AgentAction, AgentFinish]

Asynchronously decide what to do given the input.

Parameters
  • intermediate_steps (List[Tuple[AgentAction, str]]) – Steps the LLM has taken to date, along with observations.

  • callbacks (Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]) – Callbacks to run.

  • **kwargs (Any) – User inputs.

Returns

Action specifying what tool to use.

Return type

Union[AgentAction, AgentFinish]
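
A hedged sketch of driving a single asynchronous planning step by hand, assuming `agent` is a ChatAgent built with from_llm_and_tools (see the example under that classmethod):

.. code-block:: python

    async def first_step(agent):
        # No tools have been called yet, so intermediate_steps is empty;
        # "input" is the variable expected by the default human_message template.
        decision = await agent.aplan(intermediate_steps=[], input="What is 2 + 2?")
        return decision  # AgentAction (a tool call) or AgentFinish

    # run with: asyncio.run(first_step(agent))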

classmethod create_prompt(tools: Sequence[BaseTool], system_message_prefix: str = 'Answer the following questions as best you can. You have access to the following tools:', system_message_suffix: str = 'Begin! Reminder to always use the exact characters `Final Answer` when responding.', human_message: str = '{input}\n\n{agent_scratchpad}', format_instructions: str = 'The way you use the tools is by specifying a json blob.\nSpecifically, this json should have a `action` key (with the name of the tool to use) and a `action_input` key (with the input to the tool going here).\n\nThe only values that should be in the "action" field are: {tool_names}\n\nThe $JSON_BLOB should only contain a SINGLE action, do NOT return a list of multiple actions. Here is an example of a valid $JSON_BLOB:\n\n```\n{{{{\n  "action": $TOOL_NAME,\n  "action_input": $INPUT\n}}}}\n```\n\nALWAYS use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction:\n```\n$JSON_BLOB\n```\nObservation: the result of the action\n... (this Thought/Action/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', input_variables: Optional[List[str]] = None) BasePromptTemplate[source]

Create a prompt from a list of tools.

Parameters
  • tools (Sequence[BaseTool]) – A list of tools.

  • system_message_prefix (str) – The system message prefix. Default is SYSTEM_MESSAGE_PREFIX.

  • system_message_suffix (str) – The system message suffix. Default is SYSTEM_MESSAGE_SUFFIX.

  • human_message (str) – The human message. Default is HUMAN_MESSAGE.

  • format_instructions (str) – The format instructions. Default is FORMAT_INSTRUCTIONS.

  • input_variables (Optional[List[str]]) – The input variables. Default is None.

Returns

A prompt template.

Return type

BasePromptTemplate
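
A small, self-contained sketch of building the prompt directly; the tool here is a placeholder:

.. code-block:: python

    from langchain.agents.chat.base import ChatAgent
    from langchain_core.tools import Tool

    tools = [
        Tool(name="search", func=lambda q: "no results", description="Search the web."),
    ]
    prompt = ChatAgent.create_prompt(tools)
    # With input_variables=None, the returned template expects "input" and "agent_scratchpad".
    print(prompt.input_variables)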

classmethod from_llm_and_tools(llm: BaseLanguageModel, tools: Sequence[BaseTool], callback_manager: Optional[BaseCallbackManager] = None, output_parser: Optional[AgentOutputParser] = None, system_message_prefix: str = 'Answer the following questions as best you can. You have access to the following tools:', system_message_suffix: str = 'Begin! Reminder to always use the exact characters `Final Answer` when responding.', human_message: str = '{input}\n\n{agent_scratchpad}', format_instructions: str = 'The way you use the tools is by specifying a json blob.\nSpecifically, this json should have a `action` key (with the name of the tool to use) and a `action_input` key (with the input to the tool going here).\n\nThe only values that should be in the "action" field are: {tool_names}\n\nThe $JSON_BLOB should only contain a SINGLE action, do NOT return a list of multiple actions. Here is an example of a valid $JSON_BLOB:\n\n```\n{{{{\n  "action": $TOOL_NAME,\n  "action_input": $INPUT\n}}}}\n```\n\nALWAYS use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction:\n```\n$JSON_BLOB\n```\nObservation: the result of the action\n... (this Thought/Action/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', input_variables: Optional[List[str]] = None, **kwargs: Any) Agent[source]

Construct an agent from an LLM and tools.

Parameters
  • llm (BaseLanguageModel) – The language model.

  • tools (Sequence[BaseTool]) – A list of tools.

  • callback_manager (Optional[BaseCallbackManager]) – The callback manager. Default is None.

  • output_parser (Optional[AgentOutputParser]) – The output parser. Default is None.

  • system_message_prefix (str) – The system message prefix. Default is SYSTEM_MESSAGE_PREFIX.

  • system_message_suffix (str) – The system message suffix. Default is SYSTEM_MESSAGE_SUFFIX.

  • human_message (str) – The human message. Default is HUMAN_MESSAGE.

  • format_instructions (str) – The format instructions. Default is FORMAT_INSTRUCTIONS.

  • input_variables (Optional[List[str]]) – The input variables. Default is None.

  • kwargs (Any) – Additional keyword arguments.

Returns

An agent.

Return type

Agent
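
A hedged end-to-end sketch that builds the (deprecated) agent and runs it through AgentExecutor; it assumes the langchain-openai package and an OpenAI API key, and the echo tool is a placeholder:

.. code-block:: python

    from langchain.agents import AgentExecutor
    from langchain.agents.chat.base import ChatAgent
    from langchain_core.tools import Tool
    from langchain_openai import ChatOpenAI

    llm = ChatOpenAI(temperature=0)
    tools = [Tool(name="echo", func=lambda s: s, description="Echo the input back.")]

    agent = ChatAgent.from_llm_and_tools(llm=llm, tools=tools)
    executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
    # executor.invoke({"input": "Echo the word 'hello'."})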

get_allowed_tools() Optional[List[str]]

Get allowed tools.

Return type

Optional[List[str]]

get_full_inputs(intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any) Dict[str, Any]

Create the full inputs for the LLMChain from intermediate steps.

Parameters
  • intermediate_steps (List[Tuple[AgentAction, str]]) – Steps the LLM has taken to date, along with observations.

  • **kwargs (Any) – User inputs.

Returns

Full inputs for the LLMChain.

Return type

Dict[str, Any]
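
A short sketch of inspecting the inputs the agent will hand to its LLMChain, assuming `agent` comes from the from_llm_and_tools example above and `action`/`observation` come from an earlier plan() call and tool run:

.. code-block:: python

    full_inputs = agent.get_full_inputs(
        intermediate_steps=[(action, observation)],
        input="What is 2 + 2?",
    )
    # The scratchpad is the rendered Thought/Action/Observation history.
    print(full_inputs["agent_scratchpad"])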

plan(intermediate_steps: List[Tuple[AgentAction, str]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) Union[AgentAction, AgentFinish]

Given input, decide what to do.

Parameters
  • intermediate_steps (List[Tuple[AgentAction, str]]) – Steps the LLM has taken to date, along with observations.

  • callbacks (Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]) – Callbacks to run.

  • **kwargs (Any) – User inputs.

Returns

Action specifying what tool to use.

Return type

Union[AgentAction, AgentFinish]
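
A hedged sketch of the plan -> act -> observe loop that AgentExecutor normally runs for you, assuming `agent` and `tools` from the from_llm_and_tools example above:

.. code-block:: python

    from langchain_core.agents import AgentFinish

    tool_map = {t.name: t for t in tools}
    intermediate_steps = []
    while True:
        decision = agent.plan(intermediate_steps, input="Echo the word 'hello'.")
        if isinstance(decision, AgentFinish):
            print(decision.return_values["output"])
            break
        observation = tool_map[decision.tool].run(decision.tool_input)
        intermediate_steps.append((decision, observation))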

return_stopped_response(early_stopping_method: str, intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any) AgentFinish

Return response when agent has been stopped due to max iterations.

Parameters
  • early_stopping_method (str) – Method to use for early stopping.

  • intermediate_steps (List[Tuple[AgentAction, str]]) – Steps the LLM has taken to date, along with observations.

  • **kwargs (Any) – User inputs.

Returns

Agent finish object.

Return type

AgentFinish

Raises

ValueError – If early_stopping_method is not in [‘force’, ‘generate’].
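
A sketch of what AgentExecutor does when it hits its iteration limit, assuming `agent` and `intermediate_steps` from the plan() example above:

.. code-block:: python

    finish = agent.return_stopped_response(
        early_stopping_method="force",  # "generate" makes one final LLM call for an answer instead
        intermediate_steps=intermediate_steps,
        input="Echo the word 'hello'.",
    )
    print(finish.return_values["output"])  # canned "stopped due to iteration limit" message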

save(file_path: Union[Path, str]) None

Save the agent.

Parameters

file_path (Union[Path, str]) – Path to file to save the agent to.

Return type

None

Example:

.. code-block:: python

    # If working with agent executor
    agent.agent.save(file_path="path/agent.yaml")

tool_run_logging_kwargs() Dict

Return logging kwargs for tool run.

Return type

Dict

property llm_prefix: str

Prefix appended to the agent scratchpad before the next LLM call.

property observation_prefix: str

Prefix prepended to each tool observation in the agent scratchpad.

property return_values: List[str]

Return values of the agent.