langchain.document_loaders.async_html.AsyncHtmlLoader

class langchain.document_loaders.async_html.AsyncHtmlLoader(web_path: Union[str, List[str]], header_template: Optional[dict] = None, verify_ssl: Optional[bool] = True, proxies: Optional[dict] = None, autoset_encoding: bool = True, encoding: Optional[str] = None, default_parser: str = 'html.parser', requests_per_second: int = 2, requests_kwargs: Optional[Dict[str, Any]] = None, raise_for_status: bool = False, ignore_load_errors: bool = False)[source]

Load HTML asynchronously.

Initialize with a webpage path (or a list of paths).

Methods

__init__(web_path[, header_template, ...])

Initialize with a webpage path (or a list of paths).

fetch_all(urls)

Fetch all URLs concurrently with rate limiting.

lazy_load()

Lazy load text from the URL(s) in web_path.

load()

Load text from the URL(s) in web_path.

load_and_split([text_splitter])

Load Documents and split into chunks.

__init__(web_path: Union[str, List[str]], header_template: Optional[dict] = None, verify_ssl: Optional[bool] = True, proxies: Optional[dict] = None, autoset_encoding: bool = True, encoding: Optional[str] = None, default_parser: str = 'html.parser', requests_per_second: int = 2, requests_kwargs: Optional[Dict[str, Any]] = None, raise_for_status: bool = False, ignore_load_errors: bool = False)[source]

Initialize with a webpage path (or a list of paths).

async fetch_all(urls: List[str]) Any[source]

Fetch all URLs concurrently with rate limiting.
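The concurrency pattern here can be sketched with a semaphore that caps in-flight requests, in the spirit of the requests_per_second parameter. This is a stand-in illustration, not the library's actual implementation: fetch is a fake coroutine that returns a dummy HTML string instead of performing a real HTTP GET.

```python
import asyncio

async def fetch_all(urls, requests_per_second=2):
    """Sketch: fetch URLs concurrently, with at most
    `requests_per_second` requests in flight at once."""
    semaphore = asyncio.Semaphore(requests_per_second)

    async def fetch(url):
        async with semaphore:  # limit concurrent requests
            # Stand-in for a real HTTP GET (e.g. via aiohttp).
            await asyncio.sleep(0.01)
            return f"<html>{url}</html>"

    # Schedule every fetch; the semaphore throttles them.
    return await asyncio.gather(*(fetch(u) for u in urls))

results = asyncio.run(
    fetch_all(["https://a.example", "https://b.example"])
)
```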

lazy_load() Iterator[Document][source]

Lazy load text from the URL(s) in web_path.

load() List[Document][source]

Load text from the URL(s) in web_path.
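In typical use you call docs = AsyncHtmlLoader(urls).load() and get one Document per URL. The relationship between lazy_load() and load() can be sketched as follows; the Document class below is a minimal stand-in for langchain's, and pages substitutes for already-fetched HTML:

```python
from typing import Iterator, List

class Document:
    # Minimal stand-in for langchain's Document type.
    def __init__(self, page_content: str, metadata: dict):
        self.page_content = page_content
        self.metadata = metadata

def lazy_load(pages) -> Iterator[Document]:
    # Yield one Document per page; nothing is materialized up front,
    # so callers can process pages one at a time.
    for url, html in pages:
        yield Document(html, {"source": url})

def load(pages) -> List[Document]:
    # Eager variant: drain the lazy iterator into a list.
    return list(lazy_load(pages))

pages = [("https://a.example", "<p>a</p>"),
         ("https://b.example", "<p>b</p>")]
docs = load(pages)
```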

load_and_split(text_splitter: Optional[TextSplitter] = None) List[Document]

Load Documents and split into chunks. Chunks are returned as Documents.

Parameters

text_splitter – TextSplitter instance to use for splitting documents. Defaults to RecursiveCharacterTextSplitter.

Returns

List of Documents.
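The splitting step can be sketched with a naive fixed-size splitter standing in for RecursiveCharacterTextSplitter (which, unlike this sketch, splits on separators like paragraphs and sentences before falling back to characters). chunk_size and chunk_overlap mirror that splitter's parameters:

```python
def split_text(text: str, chunk_size: int = 10, chunk_overlap: int = 2):
    # Naive sketch: slide a window of `chunk_size` characters
    # forward by (chunk_size - chunk_overlap) each step, so
    # consecutive chunks share `chunk_overlap` characters.
    step = chunk_size - chunk_overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

chunks = split_text("abcdefghijklmnopqrst", chunk_size=10, chunk_overlap=2)
```

The overlap preserves context across chunk boundaries, which helps downstream retrieval when a relevant passage straddles two chunks.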
