langchain.document_loaders.concurrent.ConcurrentLoader

class langchain.document_loaders.concurrent.ConcurrentLoader(blob_loader: BlobLoader, blob_parser: BaseBlobParser, num_workers: int = 4)[source]

Load and parse documents concurrently.

A generic document loader.

Parameters
  • blob_loader – A blob loader which knows how to yield blobs

  • blob_parser – A blob parser which knows how to parse blobs into documents

  • num_workers – Max number of concurrent workers to use.

Methods

__init__(blob_loader, blob_parser[, num_workers])

A generic document loader.

from_filesystem(path, *[, glob, exclude, ...])

Create a concurrent generic document loader using a filesystem blob loader.

lazy_load()

Load documents lazily with concurrent parsing.

load()

Load all documents.

load_and_split([text_splitter])

Load all documents and split them into chunks.

__init__(blob_loader: BlobLoader, blob_parser: BaseBlobParser, num_workers: int = 4) None[source]

A generic document loader.

Parameters
  • blob_loader – A blob loader which knows how to yield blobs

  • blob_parser – A blob parser which knows how to parse blobs into documents

  • num_workers – Max number of concurrent workers to use.

classmethod from_filesystem(path: Union[str, Path], *, glob: str = '**/[!.]*', exclude: Sequence[str] = (), suffixes: Optional[Sequence[str]] = None, show_progress: bool = False, parser: Union[Literal['default'], BaseBlobParser] = 'default', num_workers: int = 4) ConcurrentLoader[source]

Create a concurrent generic document loader using a filesystem blob loader.

Parameters
  • path – The path to the directory to load documents from.

  • glob – The glob pattern to use to find documents.

  • suffixes – The suffixes to use to filter documents. If None, all files matching the glob will be loaded.

  • exclude – A list of patterns to exclude from the loader.

  • show_progress – Whether to show a progress bar or not (requires tqdm). Proxies to the file system loader.

  • parser – A blob parser which knows how to parse blobs into documents

  • num_workers – Max number of concurrent workers to use.
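The default glob pattern '**/[!.]*' matches all non-hidden files recursively; suffixes and exclude then narrow that set further. The following is a minimal stdlib sketch of that filtering logic, an approximation of the behavior described above (the helper name select_files is illustrative, not the library's API):

```python
import fnmatch
import tempfile
from pathlib import Path

def select_files(root, glob="**/[!.]*", exclude=(), suffixes=None):
    """Approximate the file selection described for from_filesystem."""
    selected = []
    for p in sorted(Path(root).glob(glob)):
        if not p.is_file():
            continue
        # Skip anything matching an exclude pattern.
        if any(fnmatch.fnmatch(str(p), pat) for pat in exclude):
            continue
        # If suffixes is given, keep only files with a matching extension.
        if suffixes is not None and p.suffix not in suffixes:
            continue
        selected.append(p)
    return selected

with tempfile.TemporaryDirectory() as d:
    root = Path(d)
    (root / "a.txt").write_text("hello")
    (root / "b.md").write_text("world")
    (root / ".hidden").write_text("secret")  # the default glob skips dotfiles
    names = [p.name for p in select_files(root, suffixes=[".txt"])]
    print(names)  # only a.txt survives both the glob and the suffix filter
```

Note that the hidden file is excluded by the glob itself, before suffixes or exclude are consulted.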

lazy_load() Iterator[Document][source]

Load documents lazily with concurrent parsing.
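Conceptually, results are yielded as worker threads finish parsing individual blobs, rather than after the whole batch completes. A simplified stdlib sketch of that concurrent lazy-parsing pattern (the parse function and blob values are stand-ins, not the library's implementation):

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def parse(blob):
    # Stand-in for a BaseBlobParser: turn raw bytes into a "document".
    return blob.decode().upper()

def lazy_parse_all(blobs, num_workers=4):
    """Yield parsed results as each worker finishes, like a lazy loader."""
    with ThreadPoolExecutor(max_workers=num_workers) as pool:
        futures = [pool.submit(parse, b) for b in blobs]
        for fut in as_completed(futures):
            yield fut.result()

blobs = [b"alpha", b"beta", b"gamma"]
docs = list(lazy_parse_all(blobs))
print(sorted(docs))  # completion order varies, so sort for display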

load() List[Document]

Load all documents.

load_and_split(text_splitter: Optional[TextSplitter] = None) List[Document]

Load all documents and split them into sentences.
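The splitting step breaks each loaded document into smaller pieces using the supplied TextSplitter. As a stand-in illustration of the idea, here is a minimal fixed-size chunker with overlap (the function split_text is an assumed example, not LangChain's splitter):

```python
def split_text(text, chunk_size=10, chunk_overlap=2):
    """Split text into fixed-size chunks, each overlapping the last."""
    step = chunk_size - chunk_overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

chunks = split_text("abcdefghijklmnopqrst", chunk_size=10, chunk_overlap=2)
print(chunks)  # consecutive chunks share chunk_overlap characters
```

Overlap between consecutive chunks helps preserve context that would otherwise be cut at a chunk boundary.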
