Video upload date:  · Duration: 1h 46m 27s · Language: English

Databricks Data Engineer Professional: An hourly batch job is configured to ingest data files from a cloud object storage container


Full Certification Question

An hourly batch job is configured to ingest data files from a cloud object storage container, where each batch represents all records produced by the source system in a given hour. The batch job processes records into the Lakehouse, ensuring no late-arriving data is missed. The user_id field is a unique key for the data, with the schema:

user_id BIGINT, username STRING, user_utc STRING, user_region STRING, last_login BIGINT, auto_pay BOOLEAN, last_updated BIGINT

New records are ingested into a table named account_history, which maintains a full history of records. The next table in the system, account_current, is a Type 1 table representing the most recent value for each unique user_id. Assuming millions of user accounts and tens of thousands of records per hourly batch, which implementation efficiently updates the account_current table as part of each hourly batch job?