BigQuery serves as your centralized analytics platform, where new data is loaded daily and an ETL pipeline modifies and prepares it for users. This pipeline is changed frequently, and a faulty change can occasionally introduce errors that go undetected for up to two weeks. You need a way to recover from these errors while keeping backup storage costs low. How should you structure your data in BigQuery and manage your backups?
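
For context when weighing the options: BigQuery's built-in time travel retains at most seven days of history, so a two-week detection window calls for a longer-lived mechanism such as table snapshots, which incur storage charges only for data that has since changed in the base table. The sketch below shows one possible setup, not a definitive answer; the dataset, table, and snapshot names and the 30-day expiration are illustrative assumptions.

```sql
-- Illustrative sketch; dataset/table/snapshot names are hypothetical.
-- Partition the prepared table by date so each daily load is isolated
-- and can be inspected or rebuilt independently.
CREATE TABLE IF NOT EXISTS analytics.prepared_events (
  event_id STRING,
  payload JSON,
  event_date DATE
)
PARTITION BY event_date;

-- After each daily ETL run, snapshot the table. Snapshot storage is
-- billed only for data that later changes in the base table, and a
-- 30-day expiration (an assumed value) covers the two-week window.
CREATE SNAPSHOT TABLE analytics.backup_20250101
CLONE analytics.prepared_events
OPTIONS (
  expiration_timestamp = TIMESTAMP_ADD(CURRENT_TIMESTAMP(), INTERVAL 30 DAY)
);

-- If a bad pipeline change is discovered later, restore the last
-- known-good state by cloning the snapshot into a new table.
CREATE TABLE analytics.prepared_events_restored
CLONE analytics.backup_20250101;
```

In practice the snapshot name would carry the run date (hard-coded here for illustration), and a scheduled query or the ETL orchestrator would issue the snapshot statement after each daily load.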