langchain_community.document_loaders.word_document.Docx2txtLoader¶

class langchain_community.document_loaders.word_document.Docx2txtLoader(file_path: Union[str, Path])[source]¶

Load a DOCX file using docx2txt and chunk it at the character level.

Defaults to checking for a local file, but if the file path is a web path, it will download it to a temporary file, use that, and then clean up the temporary file after completion.

Initialize with file path.
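
A minimal usage sketch; the file name example.docx is a placeholder, and the docx2txt package must be installed (e.g. pip install docx2txt):

    from langchain_community.document_loaders import Docx2txtLoader

    # "example.docx" is a hypothetical local file path.
    loader = Docx2txtLoader("example.docx")
    docs = loader.load()
    print(docs[0].page_content[:100])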

Methods

__init__(file_path)

Initialize with file path.

alazy_load()

A lazy loader for Documents.

aload()

Load data into Document objects.

lazy_load()

A lazy loader for Documents.

load()

Load the given path as a single page.

load_and_split([text_splitter])

Load Documents and split into chunks.

Parameters

file_path (Union[str, Path]) –

__init__(file_path: Union[str, Path])[source]¶

Initialize with file path.

Parameters

file_path (Union[str, Path]) –
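
A sketch of the two accepted path forms; both the local path and the URL below are placeholders:

    from pathlib import Path

    from langchain_community.document_loaders import Docx2txtLoader

    # A local file can be given as a str or a pathlib.Path.
    local_loader = Docx2txtLoader(Path("example.docx"))

    # A web path is downloaded to a temporary file, which is cleaned up
    # after loading.
    remote_loader = Docx2txtLoader("https://example.com/report.docx")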

async alazy_load() AsyncIterator[Document]¶

A lazy loader for Documents.

Return type

AsyncIterator[Document]

async aload() List[Document]¶

Load data into Document objects.

Return type

List[Document]
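
A sketch of the async interface with a placeholder file path:

    import asyncio

    from langchain_community.document_loaders import Docx2txtLoader

    async def main() -> None:
        loader = Docx2txtLoader("example.docx")  # hypothetical path

        # aload() gathers every Document into a list.
        docs = await loader.aload()
        print(len(docs))

        # alazy_load() yields Documents one at a time.
        async for doc in loader.alazy_load():
            print(doc.metadata)

    asyncio.run(main())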

lazy_load() Iterator[Document]¶

A lazy loader for Documents.

Return type

Iterator[Document]
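
A sketch of lazy iteration with a placeholder file path:

    from langchain_community.document_loaders import Docx2txtLoader

    loader = Docx2txtLoader("example.docx")  # hypothetical path

    # lazy_load() yields Documents one at a time instead of building a list.
    for doc in loader.lazy_load():
        print(doc.metadata["source"], len(doc.page_content))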

load() List[Document][source]¶

Load the given path as a single page.

Return type

List[Document]
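
A sketch showing that the whole file is returned as a single Document (placeholder path; the metadata shown is indicative):

    from langchain_community.document_loaders import Docx2txtLoader

    loader = Docx2txtLoader("example.docx")  # hypothetical path
    docs = loader.load()

    # The entire file comes back as one Document whose metadata records
    # the source path.
    assert len(docs) == 1
    print(docs[0].metadata)  # e.g. {'source': 'example.docx'}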

load_and_split(text_splitter: Optional[TextSplitter] = None) List[Document]¶

Load Documents and split into chunks. Chunks are returned as Documents.

Do not override this method. It should be considered deprecated.

Parameters

text_splitter (Optional[TextSplitter]) – TextSplitter instance to use for splitting documents. Defaults to RecursiveCharacterTextSplitter.

Returns

List of Documents.

Return type

List[Document]
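
A sketch using an explicit splitter, assuming the langchain_text_splitters package is installed (placeholder path):

    from langchain_community.document_loaders import Docx2txtLoader
    from langchain_text_splitters import RecursiveCharacterTextSplitter

    loader = Docx2txtLoader("example.docx")  # hypothetical path

    # Split the loaded text into overlapping character chunks.
    splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
    chunks = loader.load_and_split(text_splitter=splitter)
    print(len(chunks))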

Examples using Docx2txtLoader¶