langchain_community.document_loaders.pyspark_dataframe.PySparkDataFrameLoader

class langchain_community.document_loaders.pyspark_dataframe.PySparkDataFrameLoader(spark_session: Optional[SparkSession] = None, df: Optional[Any] = None, page_content_column: str = 'text', fraction_of_memory: float = 0.1)[source]

Load PySpark DataFrames.

Initialize with a Spark DataFrame object.

Parameters
  • spark_session (Optional[SparkSession]) – The SparkSession object.

  • df (Optional[Any]) – The Spark DataFrame object.

  • page_content_column (str) – The name of the column containing the page content. Defaults to "text".

  • fraction_of_memory (float) – The fraction of memory to use. Defaults to 0.1.
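The loader's row-to-Document mapping can be sketched in plain Python (a behavioral sketch, not the library's implementation; `SimpleDocument` is a hypothetical stand-in for `langchain_core.documents.Document`):

```python
from dataclasses import dataclass, field
from typing import Any, Dict, List

@dataclass
class SimpleDocument:
    # Stand-in for langchain_core.documents.Document (assumption for illustration).
    page_content: str
    metadata: Dict[str, Any] = field(default_factory=dict)

def rows_to_documents(rows: List[Dict[str, Any]],
                      page_content_column: str = "text") -> List[SimpleDocument]:
    """Sketch of how DataFrame rows become Documents: the value of
    page_content_column becomes page_content, and every other column
    is carried along as metadata."""
    docs = []
    for row in rows:
        content = row[page_content_column]
        metadata = {k: v for k, v in row.items() if k != page_content_column}
        docs.append(SimpleDocument(page_content=content, metadata=metadata))
    return docs

rows = [
    {"text": "Spark is fast.", "source": "doc1"},
    {"text": "LangChain loads data.", "source": "doc2"},
]
docs = rows_to_documents(rows)
```

With the real loader, `spark_session` and `df` take the place of the plain dict rows above.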

Methods

__init__([spark_session, df, ...])

Initialize with a Spark DataFrame object.

alazy_load()

A lazy loader for Documents.

aload()

Load data into Document objects.

get_num_rows()

Gets the number of "feasible" rows for the DataFrame.

lazy_load()

A lazy loader for Documents.

load()

Load from the dataframe.

load_and_split([text_splitter])

Load Documents and split into chunks.

__init__(spark_session: Optional[SparkSession] = None, df: Optional[Any] = None, page_content_column: str = 'text', fraction_of_memory: float = 0.1)[source]

Initialize with a Spark DataFrame object.

Parameters
  • spark_session (Optional[SparkSession]) – The SparkSession object.

  • df (Optional[Any]) – The Spark DataFrame object.

  • page_content_column (str) – The name of the column containing the page content. Defaults to "text".

  • fraction_of_memory (float) – The fraction of memory to use. Defaults to 0.1.

async alazy_load() AsyncIterator[Document]

A lazy loader for Documents.

Return type

AsyncIterator[Document]

async aload() List[Document]

Load data into Document objects.

Return type

List[Document]

get_num_rows() Tuple[int, int][source]

Gets the number of "feasible" rows for the DataFrame.

Return type

Tuple[int, int]
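One plausible way such a bound could be computed is sketched below; both the formula and the meaning of the returned tuple are assumptions for illustration, not the library's exact logic:

```python
from typing import Tuple

def feasible_rows(total_rows: int, row_size_bytes: int,
                  available_memory_bytes: int,
                  fraction_of_memory: float = 0.1) -> Tuple[int, int]:
    """Illustrative estimate (not the library's exact formula): cap the
    row count so the approximate in-memory size stays within the given
    fraction of available memory. Returns (feasible_rows, total_rows);
    the tuple's meaning here is an assumption."""
    budget = fraction_of_memory * available_memory_bytes
    max_rows = int(budget // row_size_bytes)
    return min(total_rows, max_rows), total_rows

# With a 0.1 memory fraction, a ~500 KB budget and 100-byte rows,
# only 500 of 1000 rows are "feasible".
bounded = feasible_rows(1_000, 100, 500_000)
```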

lazy_load() Iterator[Document][source]

A lazy loader for Documents.

Return type

Iterator[Document]
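The difference from load() is that Documents are yielded one at a time rather than materialized as a full list; a generator sketch of the same row mapping (illustrative, not the library's code):

```python
from typing import Any, Dict, Iterator, List

def lazy_docs(rows: List[Dict[str, Any]],
              page_content_column: str = "text") -> Iterator[Dict[str, Any]]:
    # Yield one document-like dict per row instead of building the full
    # list in memory -- the lazy_load vs load distinction.
    for row in rows:
        yield {
            "page_content": row[page_content_column],
            "metadata": {k: v for k, v in row.items() if k != page_content_column},
        }

it = lazy_docs([{"text": "a", "id": 1}, {"text": "b", "id": 2}])
first = next(it)  # only the first row has been processed so far
```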

load() List[Document][source]

Load from the dataframe.

Return type

List[Document]

load_and_split(text_splitter: Optional[TextSplitter] = None) List[Document]

Load Documents and split into chunks. Chunks are returned as Documents.

Do not override this method. It should be considered deprecated.

Parameters

text_splitter (Optional[TextSplitter]) – TextSplitter instance to use for splitting documents. Defaults to RecursiveCharacterTextSplitter.

Returns

List of Documents.

Return type

List[Document]
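The splitting step can be sketched with a fixed-size character splitter (a simplified, hypothetical stand-in; the default RecursiveCharacterTextSplitter splits on a hierarchy of separators rather than at fixed offsets):

```python
from typing import List

def split_text(text: str, chunk_size: int = 100,
               chunk_overlap: int = 0) -> List[str]:
    # Fixed-size character chunks with optional overlap; a simplified
    # stand-in for a TextSplitter, for illustration only.
    step = chunk_size - chunk_overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

chunks = split_text("abcdefghijklmnop", chunk_size=8, chunk_overlap=2)
```

load_and_split applies such a splitter to each loaded Document's page_content and returns the chunks as new Documents.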
