A Generative AI Engineer has created a pipeline that chunks legal documents into structured sections with metadata fields such as clause_id, title, and content_text. The engineer now needs to store this processed data for efficient retrieval using Databricks-native features while ensuring secure, governed access via Unity Catalog. Which sequence of operations should the engineer perform to write the chunked data into Unity Catalog-managed Delta tables?
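
One plausible sequence is: create (or reuse) a catalog, create a schema inside it, create a Delta table whose columns mirror the chunk metadata, write the chunk rows into it, and grant access through Unity Catalog privileges. The sketch below generates that ordered SQL; the three-level name `legal.rag.doc_chunks` and the group `legal-analysts` are illustrative assumptions, not names from the question. On Databricks, each statement would be executed with spark.sql(...), and the rows would be written with spark.createDataFrame(chunks).write.format("delta").mode("append").saveAsTable(fqn).

```python
# Hypothetical three-level Unity Catalog name: catalog.schema.table
CATALOG, SCHEMA, TABLE = "legal", "rag", "doc_chunks"
FQN = f"{CATALOG}.{SCHEMA}.{TABLE}"


def uc_setup_statements(fqn: str = FQN) -> list[str]:
    """Return the ordered SQL for a Unity Catalog-governed Delta table."""
    catalog, schema, _ = fqn.split(".")
    return [
        # 1. Catalog first: Unity Catalog objects are hierarchical.
        f"CREATE CATALOG IF NOT EXISTS {catalog}",
        # 2. Then the schema (database) inside that catalog.
        f"CREATE SCHEMA IF NOT EXISTS {catalog}.{schema}",
        # 3. Delta is the default table format on Databricks; the columns
        #    mirror the chunk metadata produced by the pipeline.
        (
            f"CREATE TABLE IF NOT EXISTS {fqn} "
            "(clause_id STRING, title STRING, content_text STRING)"
        ),
        # 4. Secure access is granted via Unity Catalog privileges
        #    (group name is an assumption for illustration).
        f"GRANT SELECT ON TABLE {fqn} TO `legal-analysts`",
    ]


# Example chunk rows as produced by the (hypothetical) chunking pipeline;
# on Databricks these would be appended with the Delta DataFrame writer.
chunks = [
    {"clause_id": "c-001", "title": "Indemnification", "content_text": "..."},
    {"clause_id": "c-002", "title": "Termination", "content_text": "..."},
]
```

The key ordering constraint is that each level of the Unity Catalog hierarchy (catalog, then schema, then table) must exist before the next is created, and grants are issued on the table only after it exists.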