langchain.agents.agent.RunnableAgent¶
- class langchain.agents.agent.RunnableAgent[source]¶
Bases:
BaseSingleActionAgent
Agent powered by Runnables.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
- param input_keys_arg: List[str] = []¶
- param return_keys_arg: List[str] = []¶
- param runnable: Runnable[dict, Union[AgentAction, AgentFinish]] [Required]¶
Runnable to call to get agent action.
- param stream_runnable: bool = True¶
Whether to stream from the runnable or not.
- If True, the underlying LLM is invoked in a streaming fashion so that individual LLM tokens are accessible via stream_log when using the AgentExecutor. If False, the LLM is invoked in a non-streaming fashion and individual LLM tokens will not be available in stream_log.
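A minimal construction sketch (not the library's own example) is shown below. The runnable here is a hypothetical RunnableLambda that always returns an AgentFinish; a real agent runnable would typically be an LCEL chain such as prompt | llm | output_parser.

.. code-block:: python

    from langchain_core.agents import AgentFinish
    from langchain_core.runnables import RunnableLambda

    from langchain.agents.agent import RunnableAgent

    # Hypothetical stand-in for a real agent chain: it ignores the intermediate
    # steps and immediately returns an AgentFinish echoing the "input" key.
    def _decide(inputs: dict) -> AgentFinish:
        return AgentFinish(return_values={"output": inputs["input"]}, log="echo")

    agent = RunnableAgent(
        runnable=RunnableLambda(_decide),
        stream_runnable=False,  # invoke the underlying runnable without streaming
    )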
- async aplan(intermediate_steps: List[Tuple[AgentAction, str]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) Union[AgentAction, AgentFinish] [source]¶
Asynchronously decide what to do based on past history and current inputs.
- Parameters
intermediate_steps (List[Tuple[AgentAction, str]]) – Steps the LLM has taken to date, along with observations.
callbacks (Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]) – Callbacks to run.
**kwargs (Any) – User inputs.
- Returns
Action specifying what tool to use, or an AgentFinish if the agent is done.
- Return type
Union[AgentAction, AgentFinish]
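A hedged usage sketch for aplan, reusing the agent constructed above; per the parameter description, user inputs such as input are passed as keyword arguments alongside the intermediate steps.

.. code-block:: python

    import asyncio

    async def main() -> None:
        result = await agent.aplan(
            intermediate_steps=[],   # no (AgentAction, observation) pairs yet
            input="What is 2 + 2?",  # user inputs go in as keyword arguments
        )
        print(result)  # AgentAction or AgentFinish

    asyncio.run(main())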
- classmethod from_llm_and_tools(llm: BaseLanguageModel, tools: Sequence[BaseTool], callback_manager: Optional[BaseCallbackManager] = None, **kwargs: Any) BaseSingleActionAgent ¶
Construct an agent from an LLM and tools.
- Parameters
llm (BaseLanguageModel) – Language model to use.
tools (Sequence[BaseTool]) – Tools to use.
callback_manager (Optional[BaseCallbackManager]) – Callback manager to use.
kwargs (Any) – Additional arguments.
- Returns
Agent object.
- Return type
BaseSingleActionAgent
- get_allowed_tools() Optional[List[str]] ¶
Get the allowed tools.
- Return type
Optional[List[str]]
- plan(intermediate_steps: List[Tuple[AgentAction, str]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) Union[AgentAction, AgentFinish] [source]¶
Based on past history and current inputs, decide what to do.
- Parameters
intermediate_steps (List[Tuple[AgentAction, str]]) – Steps the LLM has taken to date, along with the observations.
callbacks (Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]) – Callbacks to run.
**kwargs (Any) – User inputs.
- Returns
Action specifying what tool to use, or an AgentFinish if the agent is done.
- Return type
Union[AgentAction, AgentFinish]
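A sketch of a synchronous plan call under the same assumptions as the construction example above; intermediate_steps holds (AgentAction, observation) pairs from earlier tool calls.

.. code-block:: python

    from langchain_core.agents import AgentAction

    steps = [
        (
            AgentAction(tool="search", tool_input="weather in Paris", log="..."),
            "Sunny, 21 C",  # observation returned by the tool
        ),
    ]
    decision = agent.plan(
        intermediate_steps=steps,
        input="What is the weather in Paris?",
    )
    print(decision)  # next AgentAction, or AgentFinish with the final answer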
- return_stopped_response(early_stopping_method: str, intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any) AgentFinish ¶
Return response when agent has been stopped due to max iterations.
- Parameters
early_stopping_method (str) – Method to use for early stopping.
intermediate_steps (List[Tuple[AgentAction, str]]) – Steps the LLM has taken to date, along with observations.
**kwargs (Any) – User inputs.
- Returns
Agent finish object.
- Return type
AgentFinish
- Raises
ValueError – If early_stopping_method is not supported.
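A hedged sketch, assuming the default single-action-agent behavior in which early_stopping_method="force" returns a canned AgentFinish instead of consulting the LLM again.

.. code-block:: python

    finish = agent.return_stopped_response(
        early_stopping_method="force",  # assumption: "force" yields a canned answer
        intermediate_steps=steps,       # steps from the plan example above
    )
    print(finish.return_values)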
- save(file_path: Union[Path, str]) None ¶
Save the agent.
- Parameters
file_path (Union[Path, str]) – Path to file to save the agent to.
- Return type
None
Example:

.. code-block:: python

    # If working with agent executor
    agent.agent.save(file_path="path/agent.yaml")
- tool_run_logging_kwargs() Dict ¶
Return logging kwargs for tool run.
- Return type
Dict
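A brief sketch; the exact contents depend on the agent, and an empty dict is assumed for this class.

.. code-block:: python

    logging_kwargs = agent.tool_run_logging_kwargs()
    print(logging_kwargs)  # assumption: {} for a RunnableAgent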
- property input_keys: List[str]¶
Return the input keys.
- property return_values: List[str]¶
Return values of the agent.
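A sketch assuming these properties simply mirror the input_keys_arg and return_keys_arg fields supplied at construction, reusing the hypothetical _decide runnable from the first example.

.. code-block:: python

    keyed_agent = RunnableAgent(
        runnable=RunnableLambda(_decide),
        input_keys_arg=["input"],
        return_keys_arg=["output"],
    )
    print(keyed_agent.input_keys)     # assumption: ['input']
    print(keyed_agent.return_values)  # assumption: ['output']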