langchain.agents.mrkl.base.ZeroShotAgent¶
- class langchain.agents.mrkl.base.ZeroShotAgent[source]¶
Bases:
Agent
Deprecated since version 0.1.0: Use create_react_agent instead.
Agent for the MRKL chain.
- Parameters
output_parser – Output parser for the agent.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
- param allowed_tools: Optional[List[str]] = None¶
Allowed tools for the agent. If None, all tools are allowed.
- param output_parser: AgentOutputParser [Optional]¶
Output parser to use for agent.
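A minimal construction sketch follows. The fake LLM and the placeholder tool are illustrative stand-ins, not part of this API; any BaseLanguageModel and any sequence of BaseTool instances will do.
.. code-block:: python

    from langchain.agents import AgentExecutor, Tool
    from langchain.agents.mrkl.base import ZeroShotAgent
    from langchain_community.llms.fake import FakeListLLM

    # Placeholder tool; substitute a real callable and description.
    tools = [
        Tool(
            name="Search",
            func=lambda q: "no results found",
            description="Useful for answering questions about current events.",
        )
    ]

    # Any BaseLanguageModel works; a fake LLM keeps the sketch self-contained.
    llm = FakeListLLM(responses=["I now know the final answer\nFinal Answer: 4"])

    agent = ZeroShotAgent.from_llm_and_tools(llm=llm, tools=tools)
    executor = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools, verbose=True)
    # executor.invoke({"input": "What is 2 + 2?"})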
- async aplan(intermediate_steps: List[Tuple[AgentAction, str]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) Union[AgentAction, AgentFinish] ¶
Given input, asynchronously decide what to do.
- Parameters
intermediate_steps (List[Tuple[AgentAction, str]]) – Steps the LLM has taken to date, along with observations.
callbacks (Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]) – Callbacks to run.
**kwargs (Any) – User inputs.
- Returns
Action specifying what tool to use.
- Return type
Union[AgentAction, AgentFinish]
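A sketch of calling aplan directly; in normal use AgentExecutor drives this loop. Here agent is a ZeroShotAgent built as in the construction sketch above.
.. code-block:: python

    import asyncio

    async def first_step(agent):
        # No tools have run yet, so intermediate_steps is empty; "input" fills the
        # {input} variable of the default prompt.
        decision = await agent.aplan(intermediate_steps=[], input="What is 2 + 2?")
        # decision is an AgentAction (next tool call) or an AgentFinish (final answer)
        return decision

    # asyncio.run(first_step(agent))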
- classmethod create_prompt(tools: Sequence[BaseTool], prefix: str = 'Answer the following questions as best you can. You have access to the following tools:', suffix: str = 'Begin!\n\nQuestion: {input}\nThought:{agent_scratchpad}', format_instructions: str = 'Use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', input_variables: Optional[List[str]] = None) PromptTemplate [source]¶
Create a prompt in the style of the zero-shot agent.
- Parameters
tools (Sequence[BaseTool]) – List of tools the agent will have access to, used to format the prompt.
prefix (str) – String to put before the list of tools. Defaults to PREFIX.
suffix (str) – String to put after the list of tools. Defaults to SUFFIX.
format_instructions (str) – Instructions on how to use the tools. Defaults to FORMAT_INSTRUCTIONS.
input_variables (Optional[List[str]]) – List of input variables the final prompt will expect. Defaults to None.
- Returns
A PromptTemplate with the template assembled from the pieces here.
- Return type
PromptTemplate
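A sketch of building the default MRKL-style prompt for the placeholder tools defined above; leaving input_variables as None lets the variables be inferred from the assembled template.
.. code-block:: python

    from langchain.agents.mrkl.base import ZeroShotAgent

    prompt = ZeroShotAgent.create_prompt(
        tools,
        prefix="You are a careful assistant with access to the following tools:",
        input_variables=None,  # inferred from the template
    )
    # The assembled template expects 'input' and 'agent_scratchpad'.
    print(prompt.input_variables)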
- classmethod from_llm_and_tools(llm: BaseLanguageModel, tools: Sequence[BaseTool], callback_manager: Optional[BaseCallbackManager] = None, output_parser: Optional[AgentOutputParser] = None, prefix: str = 'Answer the following questions as best you can. You have access to the following tools:', suffix: str = 'Begin!\n\nQuestion: {input}\nThought:{agent_scratchpad}', format_instructions: str = 'Use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', input_variables: Optional[List[str]] = None, **kwargs: Any) Agent [source]¶
Construct an agent from an LLM and tools.
- Parameters
llm (BaseLanguageModel) – The LLM to use as the agent LLM.
tools (Sequence[BaseTool]) – The tools to use.
callback_manager (Optional[BaseCallbackManager]) – The callback manager to use. Defaults to None.
output_parser (Optional[AgentOutputParser]) – The output parser to use. Defaults to None.
prefix (str) – The prefix to use. Defaults to PREFIX.
suffix (str) – The suffix to use. Defaults to SUFFIX.
format_instructions (str) – The format instructions to use. Defaults to FORMAT_INSTRUCTIONS.
input_variables (Optional[List[str]]) – The input variables to use. Defaults to None.
kwargs (Any) – Additional parameters to pass to the agent.
- Return type
Agent
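A sketch of threading custom prompt pieces through from_llm_and_tools, here adding a hypothetical {chat_history} variable for conversational memory; llm and tools are the placeholders from the construction sketch above.
.. code-block:: python

    prefix = "Have a conversation with a human, answering as best you can. You have access to the following tools:"
    suffix = "Begin!\n\n{chat_history}\nQuestion: {input}\nThought:{agent_scratchpad}"

    agent = ZeroShotAgent.from_llm_and_tools(
        llm=llm,
        tools=tools,
        prefix=prefix,
        suffix=suffix,
        input_variables=["input", "chat_history", "agent_scratchpad"],
    )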
- get_allowed_tools() Optional[List[str]] ¶
Get allowed tools.
- Return type
Optional[List[str]]
- get_full_inputs(intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any) Dict[str, Any] ¶
Create the full inputs for the LLMChain from intermediate steps.
- Parameters
intermediate_steps (List[Tuple[AgentAction, str]]) – Steps the LLM has taken to date, along with observations.
**kwargs (Any) – User inputs.
- Returns
Full inputs for the LLMChain.
- Return type
Dict[str, Any]
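A sketch inspecting what the agent would feed its LLMChain after one tool call; AgentAction is importable from langchain_core.agents, and agent is the instance from the construction sketch above.
.. code-block:: python

    from langchain_core.agents import AgentAction

    steps = [
        (
            AgentAction(tool="Search", tool_input="weather in SF", log="Thought: I should look this up"),
            "It is sunny.",  # the observation returned by the tool
        )
    ]
    full_inputs = agent.get_full_inputs(steps, input="What is the weather in SF?")
    # full_inputs["agent_scratchpad"] now contains the prior Thought/Action/Observation text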
- plan(intermediate_steps: List[Tuple[AgentAction, str]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) Union[AgentAction, AgentFinish] ¶
Given input, decide what to do.
- Parameters
intermediate_steps (List[Tuple[AgentAction, str]]) – Steps the LLM has taken to date, along with observations.
callbacks (Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]) – Callbacks to run.
**kwargs (Any) – User inputs.
- Returns
Action specifying what tool to use.
- Return type
Union[AgentAction, AgentFinish]
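The synchronous counterpart of aplan; a sketch of one planning step and how to branch on the result.
.. code-block:: python

    from langchain_core.agents import AgentFinish

    decision = agent.plan(intermediate_steps=[], input="What is 2 + 2?")
    if isinstance(decision, AgentFinish):
        print("final answer:", decision.return_values["output"])
    else:
        print("next tool:", decision.tool, "with input:", decision.tool_input)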
- return_stopped_response(early_stopping_method: str, intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any) AgentFinish ¶
Return response when agent has been stopped due to max iterations.
- Parameters
early_stopping_method (str) – Method to use for early stopping.
intermediate_steps (List[Tuple[AgentAction, str]]) – Steps the LLM has taken to date, along with observations.
**kwargs (Any) – User inputs.
- Returns
Agent finish object.
- Return type
AgentFinish
- Raises
ValueError – If early_stopping_method is not in [‘force’, ‘generate’].
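A sketch of producing a graceful finish after an executor hits its iteration limit; with 'force' the output is a canned stop message, while 'generate' asks the LLM for a best-effort final answer.
.. code-block:: python

    finish = agent.return_stopped_response(
        early_stopping_method="force",
        intermediate_steps=steps,  # e.g. the steps from the get_full_inputs sketch
        input="What is the weather in SF?",
    )
    print(finish.return_values["output"])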
- save(file_path: Union[Path, str]) None ¶
Save the agent.
- Parameters
file_path (Union[Path, str]) – Path to file to save the agent to.
- Return type
None
Example: .. code-block:: python

    # If working with agent executor
    agent.agent.save(file_path="path/agent.yaml")
- tool_run_logging_kwargs() Dict ¶
Return logging kwargs for tool run.
- Return type
Dict
- property llm_prefix: str¶
Prefix to append the llm call with.
- Returns
"Thought:"
- Return type
str
- property observation_prefix: str¶
Prefix to append the observation with.
- Returns
"Observation:"
- Return type
str
- property return_values: List[str]¶
Return values of the agent.