langchain.agents.conversational.output_parser.ConvoOutputParser

class langchain.agents.conversational.output_parser.ConvoOutputParser[source]

Bases: AgentOutputParser

Output parser for the conversational agent.

param ai_prefix: str = 'AI'

Prefix to use before AI output.
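
A short sketch of customizing the prefix (the "Assistant" prefix and the sample text below are illustrative assumptions, not values taken from the library):

    from langchain.agents.conversational.output_parser import ConvoOutputParser

    # With a custom ai_prefix, final answers that start with "Assistant:" are
    # treated as the agent's finishing response instead of the default "AI:".
    parser = ConvoOutputParser(ai_prefix="Assistant")
    result = parser.parse("Thought: Do I need to use a tool? No\nAssistant: Hello there!")
    print(type(result).__name__)  # AgentFinish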

async abatch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) List[Output]

Default implementation runs ainvoke in parallel using asyncio.gather.

The default implementation of batch works well for IO-bound runnables.

Subclasses should override this method if they can batch more efficiently; e.g., if the underlying runnable uses an API which supports a batch mode.
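
For example, several model outputs can be parsed concurrently (a minimal sketch; the sample texts assume the conversational agent's "Action:"/"AI:" format):

    import asyncio

    from langchain.agents.conversational.output_parser import ConvoOutputParser

    async def main() -> None:
        parser = ConvoOutputParser()
        # abatch fans each input out to ainvoke and gathers the results.
        results = await parser.abatch([
            "Action: Search\nAction Input: python asyncio",
            "AI: Here is the final answer.",
        ])
        for result in results:
            print(type(result).__name__)  # AgentAction, then AgentFinish

    asyncio.run(main())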

async ainvoke(input: str | langchain_core.messages.base.BaseMessage, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) T

Default implementation of ainvoke, which calls invoke from a thread.

The default implementation allows usage of async code even if the runnable did not implement a native async version of invoke.

Subclasses should override this method if they can run asynchronously.
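
Usage mirrors invoke, but awaited (a minimal sketch; the sample text is illustrative):

    import asyncio

    from langchain.agents.conversational.output_parser import ConvoOutputParser

    async def main() -> None:
        parser = ConvoOutputParser()
        # The default ainvoke runs the synchronous parse logic in a worker thread.
        result = await parser.ainvoke("AI: All done.")
        print(result.return_values["output"])  # -> All done.

    asyncio.run(main())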

async aparse(text: str) T

Parse a single string model output into some structure.

Parameters

text – String output of a language model.

Returns

Structured output.

async aparse_result(result: List[Generation], *, partial: bool = False) T

Parse a list of candidate model Generations into a specific format.

The return value is parsed from only the first Generation in the result, which is assumed to be the highest-likelihood Generation.

Parameters

result – A list of Generations to be parsed. The Generations are assumed to be different candidate outputs for a single model input.

Returns

Structured output.

async astream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) AsyncIterator[Output]

Default implementation of astream, which calls ainvoke. Subclasses should override this method if they support streaming output.

async astream_log(input: Any, config: Optional[RunnableConfig] = None, *, diff: bool = True, include_names: Optional[Sequence[str]] = None, include_types: Optional[Sequence[str]] = None, include_tags: Optional[Sequence[str]] = None, exclude_names: Optional[Sequence[str]] = None, exclude_types: Optional[Sequence[str]] = None, exclude_tags: Optional[Sequence[str]] = None, **kwargs: Optional[Any]) Union[AsyncIterator[RunLogPatch], AsyncIterator[RunLog]]

Stream all output from a runnable, as reported to the callback system. This includes all inner runs of LLMs, Retrievers, Tools, etc.

Output is streamed as Log objects, which include a list of jsonpatch ops that describe how the state of the run has changed in each step, and the final state of the run.

The jsonpatch ops can be applied in order to construct state.

async atransform(input: AsyncIterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) AsyncIterator[Output]

Default implementation of atransform, which buffers input and calls astream. Subclasses should override this method if they can start producing output while input is still being generated.

batch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) List[Output]

Default implementation runs invoke in parallel using a thread pool executor.

The default implementation of batch works well for IO-bound runnables.

Subclasses should override this method if they can batch more efficiently; e.g., if the underlying runnable uses an API which supports a batch mode.
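
For example (a minimal sketch; the sample texts are illustrative):

    from langchain.agents.conversational.output_parser import ConvoOutputParser

    parser = ConvoOutputParser()
    # Each input is handed to invoke() on a thread pool executor.
    outputs = parser.batch([
        "Action: Calculator\nAction Input: 2 + 2",
        "AI: The answer is 4.",
    ])
    print([type(o).__name__ for o in outputs])  # ['AgentAction', 'AgentFinish']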

bind(**kwargs: Any) Runnable[Input, Output]

Bind arguments to a Runnable, returning a new Runnable.

config_schema(*, include: Optional[Sequence[str]] = None) Type[BaseModel]

The type of config this runnable accepts, specified as a pydantic model.

To mark a field as configurable, see the configurable_fields and configurable_alternatives methods.

Parameters

include – A list of fields to include in the config schema.

Returns

A pydantic model that can be used to validate config.

configurable_alternatives(which: ConfigurableField, default_key: str = 'default', **kwargs: Union[Runnable[Input, Output], Callable[[], Runnable[Input, Output]]]) RunnableSerializable[Input, Output]
configurable_fields(**kwargs: Union[ConfigurableField, ConfigurableFieldSingleOption, ConfigurableFieldMultiOption]) RunnableSerializable[Input, Output]
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) Model

Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set, since it adds all passed values.

copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) Model

Duplicate a model, optionally choosing which fields to include, exclude, and change.

Parameters
  • include – fields to include in the new model

  • exclude – fields to exclude from the new model; this takes precedence over include

  • update – values to change or add in the new model. Note: the data is not validated before creating the new model; you should trust this data

  • deep – set to True to make a deep copy of the model

Returns

new model instance

dict(**kwargs: Any) Dict

Return a dictionary representation of the output parser.

classmethod from_orm(obj: Any) Model
get_format_instructions() str[source]

Instructions on how the LLM output should be formatted.
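
For example (a sketch; the exact wording of the returned instructions is defined by the conversational agent prompt and is not reproduced here):

    from langchain.agents.conversational.output_parser import ConvoOutputParser

    parser = ConvoOutputParser()
    # Prints the instruction block that tells the model to reply either with
    # "Action:"/"Action Input:" lines or with the ai_prefix and a final answer.
    print(parser.get_format_instructions())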

get_input_schema(config: Optional[RunnableConfig] = None) Type[BaseModel]

Get a pydantic model that can be used to validate input to the runnable.

Runnables that leverage the configurable_fields and configurable_alternatives methods will have a dynamic input schema that depends on which configuration the runnable is invoked with.

This method allows you to get an input schema for a specific configuration.

Parameters

config – A config to use when generating the schema.

Returns

A pydantic model that can be used to validate input.

classmethod get_lc_namespace() List[str]

Get the namespace of the langchain object.

For example, if the class is langchain.llms.openai.OpenAI, then the namespace is ["langchain", "llms", "openai"].

get_output_schema(config: Optional[RunnableConfig] = None) Type[BaseModel]

Get a pydantic model that can be used to validate the output of the runnable.

Runnables that leverage the configurable_fields and configurable_alternatives methods will have a dynamic output schema that depends on which configuration the runnable is invoked with.

This method allows you to get an output schema for a specific configuration.

Parameters

config – A config to use when generating the schema.

Returns

A pydantic model that can be used to validate output.

invoke(input: Union[str, BaseMessage], config: Optional[RunnableConfig] = None) T

Transform a single input into an output. Override to implement.

Parameters
  • input – The input to the runnable.

  • config – A config to use when invoking the runnable. The config supports standard keys like 'tags' and 'metadata' for tracing purposes, 'max_concurrency' for controlling how much work to do in parallel, and other keys. Please refer to the RunnableConfig for more details.

Returns

The output of the runnable.
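
For example (a sketch; the sample text assumes the conversational agent's "Action:"/"Action Input:" format):

    from langchain.agents.conversational.output_parser import ConvoOutputParser

    parser = ConvoOutputParser()
    # For an output parser, invoke() ultimately delegates to parse().
    action = parser.invoke("Action: Search\nAction Input: langchain docs")
    print(action.tool, action.tool_input)  # -> Search langchain docs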

classmethod is_lc_serializable() bool

Is this class serializable?

json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) unicode

Generate a JSON representation of the model, include and exclude arguments as per dict().

encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().

classmethod lc_id() List[str]

A unique identifier for this class for serialization purposes.

The unique identifier is a list of strings that describes the path to the object.

map() Runnable[List[Input], List[Output]]

Return a new Runnable that maps a list of inputs to a list of outputs, by calling invoke() with each input.

parse(text: str) Union[AgentAction, AgentFinish][source]

Parse text into agent action/finish.
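
A minimal sketch, assuming the conversational agent's standard text format ("Action:"/"Action Input:" lines for tool calls, the ai_prefix for final answers):

    from langchain.agents.conversational.output_parser import ConvoOutputParser

    parser = ConvoOutputParser()

    # Text with "Action:" / "Action Input:" lines is parsed into an AgentAction.
    action = parser.parse(
        "Thought: Do I need to use a tool? Yes\n"
        "Action: Search\n"
        "Action Input: weather in Berlin"
    )
    print(action.tool, action.tool_input)  # -> Search weather in Berlin

    # Text containing the ai_prefix ("AI:" by default) is parsed into an AgentFinish.
    finish = parser.parse("Thought: Do I need to use a tool? No\nAI: It is sunny today.")
    print(finish.return_values["output"])  # -> It is sunny today.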

classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) Model
classmethod parse_obj(obj: Any) Model
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) Model
parse_result(result: List[Generation], *, partial: bool = False) T

Parse a list of candidate model Generations into a specific format.

The return value is parsed from only the first Generation in the result, which is assumed to be the highest-likelihood Generation.

Parameters

result – A list of Generations to be parsed. The Generations are assumed to be different candidate outputs for a single model input.

Returns

Structured output.
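
For example (a sketch; the Generation texts are illustrative):

    from langchain_core.outputs import Generation

    from langchain.agents.conversational.output_parser import ConvoOutputParser

    parser = ConvoOutputParser()
    # Only the first Generation is parsed; the second candidate is ignored.
    result = parser.parse_result([
        Generation(text="AI: The capital of France is Paris."),
        Generation(text="AI: Paris."),
    ])
    print(result.return_values["output"])  # -> The capital of France is Paris.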

parse_with_prompt(completion: str, prompt: PromptValue) Any

Parse the output of an LLM call with the input prompt for context.

The prompt is largely provided in the event the OutputParser wants to retry or fix the output in some way, and needs information from the prompt to do so.

Parameters
  • completion – String output of a language model.

  • prompt – Input PromptValue.

Returns

Structured output.

classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') DictStrAny
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) unicode
stream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) Iterator[Output]

Default implementation of stream, which calls invoke. Subclasses should override this method if they support streaming output.

to_json() Union[SerializedConstructor, SerializedNotImplemented]
to_json_not_implemented() SerializedNotImplemented
transform(input: Iterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) Iterator[Output]

Default implementation of transform, which buffers input and then calls stream. Subclasses should override this method if they can start producing output while input is still being generated.

classmethod update_forward_refs(**localns: Any) None

Try to update ForwardRefs on fields based on this Model, globalns and localns.

classmethod validate(value: Any) Model
with_config(config: Optional[RunnableConfig] = None, **kwargs: Any) Runnable[Input, Output]

Bind config to a Runnable, returning a new Runnable.
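
For example (a sketch; the tag name is an illustrative assumption):

    from langchain.agents.conversational.output_parser import ConvoOutputParser

    # The bound config (here, a tracing tag) is applied on every later invocation.
    tagged = ConvoOutputParser().with_config({"tags": ["convo-output-parser"]})
    result = tagged.invoke("AI: Done.")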

with_fallbacks(fallbacks: Sequence[Runnable[Input, Output]], *, exceptions_to_handle: Tuple[Type[BaseException], ...] = (<class 'Exception'>,)) RunnableWithFallbacksT[Input, Output]

Add fallbacks to a runnable, returning a new Runnable.

Parameters
  • fallbacks – A sequence of runnables to try if the original runnable fails.

  • exceptions_to_handle – A tuple of exception types to handle.

Returns

A new Runnable that will try the original runnable, and then each fallback in order, upon failures.
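
For example (a sketch; the fallback parser with a different ai_prefix stands in for any alternative runnable):

    from langchain_core.exceptions import OutputParserException

    from langchain.agents.conversational.output_parser import ConvoOutputParser

    primary = ConvoOutputParser()                        # expects "AI:" for final answers
    fallback = ConvoOutputParser(ai_prefix="Assistant")  # hypothetical alternative prefix

    robust = primary.with_fallbacks(
        [fallback], exceptions_to_handle=(OutputParserException,)
    )
    # The primary parser raises on this text, so the fallback handles it.
    result = robust.invoke("Assistant: Falling back worked.")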

with_listeners(*, on_start: Optional[Listener] = None, on_end: Optional[Listener] = None, on_error: Optional[Listener] = None) Runnable[Input, Output]

Bind lifecycle listeners to a Runnable, returning a new Runnable.

  • on_start – Called before the runnable starts running, with the Run object.

  • on_end – Called after the runnable finishes running, with the Run object.

  • on_error – Called if the runnable throws an error, with the Run object.

The Run object contains information about the run, including its id, type, input, output, error, start_time, end_time, and any tags or metadata added to the run.

with_retry(*, retry_if_exception_type: ~typing.Tuple[~typing.Type[BaseException], ...] = (<class 'Exception'>,), wait_exponential_jitter: bool = True, stop_after_attempt: int = 3) Runnable[Input, Output]

Create a new Runnable that retries the original runnable on exceptions.

Parameters
  • retry_if_exception_type – A tuple of exception types to retry on.

  • wait_exponential_jitter – Whether to add jitter to the wait time between retries.

  • stop_after_attempt – The maximum number of attempts to make before giving up.

Returns

A new Runnable that retries the original runnable on exceptions.
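
For example (a sketch; retrying a deterministic parser on its own is mostly useful when it is composed after a model in a larger chain):

    from langchain_core.exceptions import OutputParserException

    from langchain.agents.conversational.output_parser import ConvoOutputParser

    # Retry parsing up to three times if an OutputParserException is raised.
    retrying_parser = ConvoOutputParser().with_retry(
        retry_if_exception_type=(OutputParserException,),
        stop_after_attempt=3,
    )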

with_types(*, input_type: Optional[Type[Input]] = None, output_type: Optional[Type[Output]] = None) Runnable[Input, Output]

Bind input and output types to a Runnable, returning a new Runnable.

property InputType: Any

The type of input this runnable accepts, specified as a type annotation.

property OutputType: Type[langchain_core.output_parsers.base.T]

The type of output this runnable produces, specified as a type annotation.

property config_specs: List[langchain_core.runnables.utils.ConfigurableFieldSpec]

List configurable fields for this runnable.

property input_schema: Type[pydantic.main.BaseModel]

The type of input this runnable accepts, specified as a pydantic model.

property lc_attributes: Dict

List of attribute names that should be included in the serialized kwargs.

These attributes must be accepted by the constructor.

property lc_secrets: Dict[str, str]

A map of constructor argument names to secret ids.

For example,

{"openai_api_key": "OPENAI_API_KEY"}

property output_schema: Type[pydantic.main.BaseModel]

The type of output this runnable produces, specified as a pydantic model.