The legacy Apache Hadoop servers at DataSphere Analytics are nearing end-of-life, and the IT department has chosen to migrate the cluster to Google Cloud Dataproc. A like-for-like migration would require 50 TB of Google Persistent Disk per node, and the Chief Information Officer is concerned about the high cost of that much block storage. As a data engineer, how can you minimize storage costs during the migration?
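
For context, the usual cost-saving pattern in this scenario is to decouple storage from compute: keep datasets in Cloud Storage, accessed through the preinstalled Cloud Storage connector's gs:// scheme, rather than replicating them on HDFS over Persistent Disk, so the cluster needs only small boot and scratch disks. Below is a minimal PySpark sketch of a job running this way on Dataproc; the bucket name datasphere-datalake and the paths are hypothetical placeholders, not part of the question.

    # PySpark job on Dataproc: reads and writes Cloud Storage directly via the
    # Cloud Storage connector, so no large HDFS volumes are needed.
    # The bucket "datasphere-datalake" is an assumed placeholder.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("gcs-backed-etl").getOrCreate()

    # Read source data straight from Cloud Storage instead of hdfs:// paths.
    events = spark.read.parquet("gs://datasphere-datalake/raw/events/")

    # Example transformation: keep only completed events.
    completed = events.filter(events["status"] == "COMPLETED")

    # Write results back to Cloud Storage; nothing persists on cluster disks,
    # so the cluster can use small boot disks or be deleted after the job.
    completed.write.mode("overwrite").parquet("gs://datasphere-datalake/curated/events/")

Because the data survives independently of the cluster, this design also allows ephemeral Dataproc clusters that exist only for the duration of a job, which reduces compute costs as well as storage costs.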