langchain_community.document_loaders.hugging_face_dataset.HuggingFaceDatasetLoader

class langchain_community.document_loaders.hugging_face_dataset.HuggingFaceDatasetLoader(path: str, page_content_column: str = 'text', name: Optional[str] = None, data_dir: Optional[str] = None, data_files: Optional[Union[str, Sequence[str], Mapping[str, Union[str, Sequence[str]]]]] = None, cache_dir: Optional[str] = None, keep_in_memory: Optional[bool] = None, save_infos: bool = False, use_auth_token: Optional[Union[bool, str]] = None, num_proc: Optional[int] = None)[source]

Load from Hugging Face Hub datasets.

Initialize the HuggingFaceDatasetLoader.

Parameters
  • path (str) – Path or name of the dataset.

  • page_content_column (str) – Page content column name. Default is “text”.

  • name (Optional[str]) – Name of the dataset configuration.

  • data_dir (Optional[str]) – Data directory of the dataset configuration.

  • data_files (Optional[Union[str, Sequence[str], Mapping[str, Union[str, Sequence[str]]]]]) – Path(s) to source data file(s).

  • cache_dir (Optional[str]) – Directory to read/write data.

  • keep_in_memory (Optional[bool]) – Whether to copy the dataset in memory.

  • save_infos (bool) – Whether to save the dataset information (checksums, size, splits, …). Defaults to False.

  • use_auth_token (Optional[Union[bool, str]]) – Bearer token for remote files on the Datasets Hub.

  • num_proc (Optional[int]) – Number of processes to use when downloading and generating the dataset locally.
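
Example

A minimal usage sketch (requires the datasets package to be installed; the dataset name "imdb" is illustrative):

    from langchain_community.document_loaders import HuggingFaceDatasetLoader

    # Each row's "text" column becomes a Document's page_content;
    # in current versions the remaining columns are carried along as metadata.
    loader = HuggingFaceDatasetLoader(path="imdb", page_content_column="text")
    docs = loader.load()
    print(docs[0].page_content[:100])
    print(docs[0].metadata)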

Methods

__init__(path[, page_content_column, name, ...])

Initialize the HuggingFaceDatasetLoader.

alazy_load()

A lazy loader for Documents.

aload()

Load data into Document objects.

lazy_load()

Load documents lazily.

load()

Load data into Document objects.

load_and_split([text_splitter])

Load Documents and split into chunks.

parse_obj(page_content)

Parse the page content into a string.

__init__(path: str, page_content_column: str = 'text', name: Optional[str] = None, data_dir: Optional[str] = None, data_files: Optional[Union[str, Sequence[str], Mapping[str, Union[str, Sequence[str]]]]] = None, cache_dir: Optional[str] = None, keep_in_memory: Optional[bool] = None, save_infos: bool = False, use_auth_token: Optional[Union[bool, str]] = None, num_proc: Optional[int] = None)[source]

Initialize the HuggingFaceDatasetLoader.

Parameters
  • path (str) – Path or name of the dataset.

  • page_content_column (str) – Page content column name. Default is “text”.

  • name (Optional[str]) – Name of the dataset configuration.

  • data_dir (Optional[str]) – Data directory of the dataset configuration.

  • data_files (Optional[Union[str, Sequence[str], Mapping[str, Union[str, Sequence[str]]]]]) – Path(s) to source data file(s).

  • cache_dir (Optional[str]) – Directory to read/write data.

  • keep_in_memory (Optional[bool]) – Whether to copy the dataset in memory.

  • save_infos (bool) – Whether to save the dataset information (checksums, size, splits, …). Defaults to False.

  • use_auth_token (Optional[Union[bool, str]]) – Bearer token for remote files on the Datasets Hub.

  • num_proc (Optional[int]) – Number of processes to use when downloading and generating the dataset locally.

async alazy_load() AsyncIterator[Document]

A lazy loader for Documents.

Return type

AsyncIterator[Document]

async aload() List[Document]

Load data into Document objects.

Return type

List[Document]
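
Example

A sketch of the async interface (the dataset name "imdb" is illustrative): alazy_load() streams Documents one at a time, while aload() collects them all into a list.

    import asyncio

    from langchain_community.document_loaders import HuggingFaceDatasetLoader

    async def main() -> None:
        loader = HuggingFaceDatasetLoader(path="imdb")
        # Stream Documents as they are produced.
        async for doc in loader.alazy_load():
            print(doc.page_content[:80])
            break
        # Or gather everything into a list in one call.
        docs = await loader.aload()
        print(len(docs))

    asyncio.run(main())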

lazy_load() Iterator[Document][source]

Load documents lazily.

Return type

Iterator[Document]
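
Example

A sketch of lazy iteration (the dataset name "imdb" is illustrative; handle_document is a hypothetical per-document callback):

    from langchain_community.document_loaders import HuggingFaceDatasetLoader

    loader = HuggingFaceDatasetLoader(path="imdb")
    # Documents are yielded one at a time, so the full dataset never has
    # to be held in memory as a list of Documents.
    for doc in loader.lazy_load():
        handle_document(doc)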

load() List[Document]

Load data into Document objects.

Return type

List[Document]

load_and_split(text_splitter: Optional[TextSplitter] = None) List[Document]

Load Documents and split into chunks. Chunks are returned as Documents.

Do not override this method; it should be considered deprecated.

Parameters

text_splitter (Optional[TextSplitter]) – TextSplitter instance to use for splitting documents. Defaults to RecursiveCharacterTextSplitter.

Returns

List of Documents.

Return type

List[Document]
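
Example

A sketch that splits loaded Documents with an explicit splitter (requires the langchain-text-splitters package; the dataset name and chunk sizes are illustrative):

    from langchain_text_splitters import RecursiveCharacterTextSplitter

    from langchain_community.document_loaders import HuggingFaceDatasetLoader

    loader = HuggingFaceDatasetLoader(path="imdb")
    splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50)
    # Each resulting chunk is returned as its own Document.
    chunks = loader.load_and_split(text_splitter=splitter)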

parse_obj(page_content: Union[str, object]) str[source]

Parse the page content into a string.

Parameters

page_content (Union[str, object]) – The raw value of the page content column.

Return type

str
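
Example

A sketch of how parse_obj behaves. The exact handling of non-string values is an assumption (current implementations JSON-serialize them) and is worth verifying against your installed version:

    from langchain_community.document_loaders import HuggingFaceDatasetLoader

    loader = HuggingFaceDatasetLoader(path="imdb")  # illustrative dataset
    # Structured column values (dicts, lists) come back as a JSON string
    # (assumption: serialization via json.dumps).
    text = loader.parse_obj({"title": "Example", "body": "Hello"})
    assert isinstance(text, str)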

Examples using HuggingFaceDatasetLoader