Full Certification Question
A data engineering team is working on a POC for which they need to populate a downstream table from a source table that was created with Change Data Feed enabled. The upstream table has 45 columns, while the downstream table should have 47 columns: the 45 source columns plus two added columns, typeOfChange and version, where typeOfChange depicts the type of change and version signifies the version number of the change. The target table should also be truncated and reloaded with new data whenever the query is re-run.

spark.read.format('delta') \
    .option('readChangeFeed', 'true') \
    .option('startingVersion', 0) \
    .table('sourceTable') \
    _______ (1) ________
    _______ (2) ________
    .write.format('delta') \
    _______ (3) ________
    .saveAsTable('targetTable')

Which of the following correctly fills the numbered blanks to achieve the expected result?
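For reference, reading a Delta table with readChangeFeed enabled adds three metadata columns to the source schema: _change_type, _commit_version, and _commit_timestamp. Reaching exactly 47 columns therefore means dropping the timestamp, renaming the other two, and overwriting the target on each run. The sketch below shows the resulting PySpark chain in comments and simulates the column reshaping on a plain Python dict so it runs without a Spark installation; the column values and the reshape helper are illustrative assumptions, not part of the original question.

```python
# Filling the blanks (based on Delta's documented CDF metadata columns):
#
#   (1) .drop('_commit_timestamp')
#   (2) .withColumnRenamed('_change_type', 'typeOfChange')
#       .withColumnRenamed('_commit_version', 'version')
#   (3) .mode('overwrite')      # truncate-and-load on every re-run
#
# The same reshaping, simulated on a plain dict (no Spark required):

def reshape(row: dict) -> dict:
    """Mimic blanks (1) and (2): drop the timestamp, rename the rest."""
    out = {k: v for k, v in row.items() if k != '_commit_timestamp'}
    out['typeOfChange'] = out.pop('_change_type')
    out['version'] = out.pop('_commit_version')
    return out

# A hypothetical CDF row: 45 business columns plus the three CDF columns.
cdf_row = {**{f'col{i}': i for i in range(45)},
           '_change_type': 'insert',
           '_commit_version': 3,
           '_commit_timestamp': '2024-01-01T00:00:00'}

result = reshape(cdf_row)
print(len(result))              # 47 columns, as the target table expects
print(result['typeOfChange'])   # insert
print(result['version'])        # 3
```

Using .mode('overwrite') in blank (3) is what satisfies the truncate-and-load requirement: without it, saveAsTable fails if targetTable already exists, and .mode('append') would accumulate rows across re-runs instead of replacing them.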