langchain_community.document_loaders.web_base.WebBaseLoader

class langchain_community.document_loaders.web_base.WebBaseLoader(web_path: Union[str, Sequence[str]] = '', header_template: Optional[dict] = None, verify_ssl: bool = True, proxies: Optional[dict] = None, continue_on_failure: bool = False, autoset_encoding: bool = True, encoding: Optional[str] = None, web_paths: Sequence[str] = (), requests_per_second: int = 2, default_parser: str = 'html.parser', requests_kwargs: Optional[Dict[str, Any]] = None, raise_for_status: bool = False, bs_get_text_kwargs: Optional[Dict[str, Any]] = None, bs_kwargs: Optional[Dict[str, Any]] = None, session: Optional[Any] = None)[source]

Load HTML pages using requests and parse them with `BeautifulSoup`.

Initialize loader.

Parameters
  • web_paths (Sequence[str]) – Web paths to load from.

  • requests_per_second (int) – Max number of concurrent requests to make.

  • default_parser (str) – Default parser to use for BeautifulSoup.

  • requests_kwargs (Optional[Dict[str, Any]]) – kwargs for requests.

  • raise_for_status (bool) – Raise an exception if the HTTP status code denotes an error.

  • bs_get_text_kwargs (Optional[Dict[str, Any]]) – kwargs for BeautifulSoup's get_text.

  • bs_kwargs (Optional[Dict[str, Any]]) – kwargs for BeautifulSoup web page parsing.

  • web_path (Union[str, Sequence[str]]) – A single web path or a sequence of web paths; an alternative to web_paths (pass only one of the two).

  • header_template (Optional[dict]) – Template for the HTTP request headers.

  • verify_ssl (bool) – Whether to verify SSL certificates when fetching pages.

  • proxies (Optional[dict]) – Proxies to use for the requests, keyed by protocol.

  • continue_on_failure (bool) – Whether to continue loading the remaining pages if fetching one fails, instead of raising.

  • autoset_encoding (bool) – Whether to set the response encoding automatically from the apparent encoding.

  • encoding (Optional[str]) – Explicit encoding for the responses; takes precedence over autoset_encoding.

  • session (Any) – A preconfigured requests.Session to use instead of creating a new one.

Attributes

web_path

Methods

__init__([web_path, header_template, ...])

Initialize loader.

alazy_load()

A lazy loader for Documents.

aload()

Asynchronously load text from the URLs in web_path into Documents.

fetch_all(urls)

Fetch all urls concurrently with rate limiting.

lazy_load()

Lazily load text from the URL(s) in web_path.

load()

Load data into Document objects.

load_and_split([text_splitter])

Load Documents and split into chunks.

scrape([parser])

Scrape data from the webpage and return it as a BeautifulSoup object.

scrape_all(urls[, parser])

Fetch all URLs, then return BeautifulSoup objects for all results.

__init__(web_path: Union[str, Sequence[str]] = '', header_template: Optional[dict] = None, verify_ssl: bool = True, proxies: Optional[dict] = None, continue_on_failure: bool = False, autoset_encoding: bool = True, encoding: Optional[str] = None, web_paths: Sequence[str] = (), requests_per_second: int = 2, default_parser: str = 'html.parser', requests_kwargs: Optional[Dict[str, Any]] = None, raise_for_status: bool = False, bs_get_text_kwargs: Optional[Dict[str, Any]] = None, bs_kwargs: Optional[Dict[str, Any]] = None, session: Optional[Any] = None) → None[source]

Initialize loader.

Parameters
  • web_paths (Sequence[str]) – Web paths to load from.

  • requests_per_second (int) – Max number of concurrent requests to make.

  • default_parser (str) – Default parser to use for BeautifulSoup.

  • requests_kwargs (Optional[Dict[str, Any]]) – kwargs for requests.

  • raise_for_status (bool) – Raise an exception if the HTTP status code denotes an error.

  • bs_get_text_kwargs (Optional[Dict[str, Any]]) – kwargs for BeautifulSoup's get_text.

  • bs_kwargs (Optional[Dict[str, Any]]) – kwargs for BeautifulSoup web page parsing.

  • web_path (Union[str, Sequence[str]]) – A single web path or a sequence of web paths; an alternative to web_paths (pass only one of the two).

  • header_template (Optional[dict]) – Template for the HTTP request headers.

  • verify_ssl (bool) – Whether to verify SSL certificates when fetching pages.

  • proxies (Optional[dict]) – Proxies to use for the requests, keyed by protocol.

  • continue_on_failure (bool) – Whether to continue loading the remaining pages if fetching one fails, instead of raising.

  • autoset_encoding (bool) – Whether to set the response encoding automatically from the apparent encoding.

  • encoding (Optional[str]) – Explicit encoding for the responses; takes precedence over autoset_encoding.

  • session (Optional[Any]) – A preconfigured requests.Session to use instead of creating a new one.

Return type

None

async alazy_load() → AsyncIterator[Document]

A lazy loader for Documents.

Return type

AsyncIterator[Document]

aload() → List[Document][source]

Asynchronously load text from the URLs in web_path into Documents.

Return type

List[Document]

async fetch_all(urls: List[str]) → Any[source]

Fetch all urls concurrently with rate limiting.

Parameters

urls (List[str]) –

Return type

Any
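The rate-limiting idea behind fetch_all can be sketched with stdlib primitives alone. This miniature is an illustration, not the library's implementation: a semaphore bounds how many coroutines run at once, and the sleep stands in for the real HTTP request.

```python
import asyncio

async def fetch_all(urls, max_concurrency=2):
    # Bound concurrent "requests" with a semaphore, as a rate limiter does.
    semaphore = asyncio.Semaphore(max_concurrency)

    async def fetch(url):
        async with semaphore:
            await asyncio.sleep(0)  # stand-in for the real HTTP request
            return f"<html>{url}</html>"

    # Schedule all fetches; the semaphore gates how many run together.
    return await asyncio.gather(*(fetch(u) for u in urls))

pages = asyncio.run(fetch_all(["https://example.com/a", "https://example.com/b"]))
```

Results come back in the same order as the input URLs because asyncio.gather preserves argument order.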

lazy_load() → Iterator[Document][source]

Lazily load text from the URL(s) in web_path.

Return type

Iterator[Document]

load() → List[Document]

Load data into Document objects.

Return type

List[Document]

load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]

Load Documents and split into chunks. Chunks are returned as Documents.

Do not override this method; it should be considered deprecated.

Parameters

text_splitter (Optional[TextSplitter]) – TextSplitter instance to use for splitting documents. Defaults to RecursiveCharacterTextSplitter.

Returns

List of Documents.

Return type

List[Document]

scrape(parser: Optional[str] = None) → Any[source]

Scrape data from the webpage and return it as a BeautifulSoup object.

Parameters

parser (Optional[str]) –

Return type

Any
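What scrape() hands back is a BeautifulSoup object built with the loader's default_parser ("html.parser"). Parsing a local, made-up HTML snippet shows what that object offers without any network access:

```python
from bs4 import BeautifulSoup

# A made-up snippet standing in for a fetched page.
html = "<html><head><title>Example</title></head><body><p>Hello</p></body></html>"

# The same parser the loader defaults to.
soup = BeautifulSoup(html, "html.parser")

title = soup.title.get_text()
# get_text is where bs_get_text_kwargs ends up when the loader
# converts a soup into a Document's page_content.
text = soup.get_text(separator=" ", strip=True)
```

Passing a different parser name (e.g. "xml" for sitemaps) to scrape() overrides default_parser for that call.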

scrape_all(urls: List[str], parser: Optional[str] = None) → List[Any][source]

Fetch all URLs, then return BeautifulSoup objects for all results.

Parameters
  • urls (List[str]) –

  • parser (Optional[str]) –

Return type

List[Any]

Examples using WebBaseLoader