Your team is running Apache Spark jobs on Dataproc


Full Certification Question

Your team is running Apache Spark jobs on Dataproc clusters to process large datasets. You notice that the cluster’s preemptible workers are being aggressively decommissioned during the job, causing the job to restart tasks and take longer to complete. You want to reduce costs without impacting the job’s runtime. What should you do?
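For context, a cluster like the one in this scenario might be provisioned roughly as follows. This is an illustrative sketch, not part of the question: the cluster name, region, and worker counts are assumptions, and `--secondary-worker-type=preemptible` is what makes the secondary workers preemptible VMs, which GCP can reclaim at any time.

```shell
# Hypothetical setup matching the scenario: a Dataproc cluster whose
# secondary workers are preemptible VMs. Preemptible VMs are cheaper
# than standard workers, but Compute Engine can reclaim them mid-job,
# which is the source of the aggressive decommissioning described above.
gcloud dataproc clusters create spark-etl-cluster \
  --region=us-central1 \
  --num-workers=2 \
  --num-secondary-workers=4 \
  --secondary-worker-type=preemptible
```

Because preempted workers lose their in-progress Spark tasks, the trade-off in the question is between the per-hour savings of preemptible capacity and the runtime cost of re-executing lost work.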