Full Certification Question
A data engineer has configured a Structured Streaming job to read from a dataset, perform some transformations, and then write the output to a new dataset. The code block used by the engineer is shown below:

    spark.readStream
        .table("customer_data")
        .withColumn("avg_amount", col("total_spent") / col("items_bought"))
        .writeStream
        .option("checkpointLocation", checkpointDir)
        .outputMode("append")
        ._________
        .table("processed_customer_data")

If the data engineer wants the query to execute in multiple micro-batches, process all the available data, and then stop automatically, which of the following lines of code should be used to fill in the blank above?
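For context, the blank is where a trigger setting would go, and the trigger controls how micro-batches are scheduled. A minimal sketch of the completed pipeline is below, using `trigger(availableNow=True)`, which processes all currently available data in multiple micro-batches and then stops the query automatically. This assumes a Databricks/Spark environment where `spark`, `checkpointDir`, and the `customer_data` table already exist (all names taken from the question).

```python
from pyspark.sql.functions import col

# Sketch only: requires a running Spark session (`spark`), a defined
# `checkpointDir`, and an existing `customer_data` table, as in the question.
query = (
    spark.readStream
        .table("customer_data")
        .withColumn("avg_amount", col("total_spent") / col("items_bought"))
        .writeStream
        .option("checkpointLocation", checkpointDir)
        .outputMode("append")
        # availableNow=True: consume everything available at start time
        # across multiple micro-batches, then stop automatically.
        .trigger(availableNow=True)
        .table("processed_customer_data")
)
query.awaitTermination()  # returns once all available data is processed
```

By contrast, `trigger(once=True)` also stops automatically but processes all available data in a single micro-batch, while `trigger(processingTime="...")` and continuous triggers keep the query running indefinitely.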