pyspark.sql.streaming.DataStreamReader.parquet

DataStreamReader.parquet(path, mergeSchema=None, pathGlobFilter=None, recursiveFileLookup=None)

Loads a Parquet file stream, returning the result as a DataFrame.

New in version 2.0.0.

Parameters:
path : str

the path in any Hadoop supported file system.

mergeSchema : str or bool, optional

sets whether we should merge schemas collected from all Parquet part-files. If set, this overrides the SQL config spark.sql.parquet.mergeSchema, which otherwise supplies the default value.

pathGlobFilter : str or bool, optional

an optional glob pattern to only include files with paths matching the pattern. The syntax follows org.apache.hadoop.fs.GlobFilter. It does not change the behavior of partition discovery.

recursiveFileLookup : str or bool, optional

recursively scan a directory for files. Using this option disables partition discovery. All three options are demonstrated in the sketch below.
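
These options may be passed as keyword arguments, as in the signature above, or set via option() on the reader. A minimal sketch, using the sdf_schema defined in the Examples below and a hypothetical input directory /tmp/events containing nested Parquet part-files:

>>> nested_sdf = (
...     spark.readStream.schema(sdf_schema)
...     .parquet(
...         "/tmp/events",               # hypothetical directory
...         mergeSchema=True,            # merge schemas across part-files
...         pathGlobFilter="*.parquet",  # include only files matching the glob
...         recursiveFileLookup=True,    # scan nested dirs; disables partition discovery
...     )
... )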

Examples

>>> import tempfile
>>> from pyspark.sql.types import StructType, StructField, StringType
>>> sdf_schema = StructType([StructField("data", StringType(), True)])  # illustrative schema
>>> parquet_sdf = spark.readStream.schema(sdf_schema).parquet(tempfile.mkdtemp())
>>> parquet_sdf.isStreaming
True
>>> parquet_sdf.schema == sdf_schema
True
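
The streaming DataFrame above only defines the source; no files are read until a query is started against a sink. A minimal sketch of draining the stream to the console (the sink, output mode, and trigger shown are illustrative choices, not part of this API):

>>> query = (
...     parquet_sdf.writeStream
...     .format("console")         # print each micro-batch to stdout
...     .outputMode("append")      # file sources produce append-only rows
...     .trigger(once=True)        # process currently available files, then stop
...     .start()
... )
>>> query.awaitTermination()       # block until the one-shot run finishes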