Your team is running Apache Spark jobs on Dataproc clusters to process large datasets. You notice that the cluster’s preemptible workers are being aggressively decommissioned during the job, causing the job to restart tasks and take longer to complete. You want to reduce costs without impacting the job’s runtime. What should you do?
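The scenario above typically comes down to rebalancing the cluster's worker mix and softening decommissioning. A hedged sketch using the `gcloud` CLI, assuming a cluster named `my-cluster` in region `us-central1` (both placeholder names); the flags shown (`--num-secondary-workers`, `--secondary-worker-type`, `--graceful-decommission-timeout`) are real Dataproc options, but the specific values are illustrative, not a definitive recommendation:

```shell
# Create a cluster with a smaller share of preemptible (spot) secondary
# workers, so fewer in-flight Spark tasks are lost to decommissioning.
# Cluster/region names are placeholders.
gcloud dataproc clusters create my-cluster \
  --region=us-central1 \
  --num-workers=4 \
  --num-secondary-workers=2 \
  --secondary-worker-type=spot

# When scaling an existing cluster down, allow running tasks to finish
# before nodes are removed (graceful decommissioning).
gcloud dataproc clusters update my-cluster \
  --region=us-central1 \
  --num-secondary-workers=1 \
  --graceful-decommission-timeout=1h
```

The general trade-off: more primary (non-preemptible) workers raises cost but avoids task restarts, while graceful decommissioning lets the cluster shrink without discarding partially completed work.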