Your analytics team wants to build a simple statistical model to identify which customers are most likely to return, based on several key metrics. The data is stored in Cloud Storage, and the model will be run using Apache Spark on Cloud Dataproc. During testing, you observed that the job completes in about 30 minutes on a 15-node cluster, and the results are written to BigQuery. The job will run once per week. What is the most cost-effective way to configure the Dataproc cluster for this recurring workload?
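For a short weekly batch job like this, the cost-effective pattern is usually a job-scoped ("ephemeral") cluster that is created just before the run and deleted when it finishes, rather than a long-running cluster billed all week. The sketch below illustrates that pattern with the gcloud CLI; the cluster name, region, machine types, jar path, and the split between primary and secondary (preemptible) workers are placeholder assumptions, not values from the question.

```shell
# Hypothetical sketch: create an ephemeral Dataproc cluster for the weekly run.
# --max-idle auto-deletes the cluster if it sits unused, as a safety net.
gcloud dataproc clusters create weekly-model-cluster \
    --region=us-central1 \
    --num-workers=2 \
    --num-secondary-workers=13 \
    --worker-machine-type=n2-standard-4 \
    --max-idle=30m

# Submit the Spark job (jar path and main class are placeholders).
gcloud dataproc jobs submit spark \
    --cluster=weekly-model-cluster \
    --region=us-central1 \
    --class=com.example.ReturnModel \
    --jars=gs://my-bucket/return-model.jar

# Tear the cluster down as soon as the job completes.
gcloud dataproc clusters delete weekly-model-cluster \
    --region=us-central1 --quiet
```

In practice this create-submit-delete sequence is often expressed as a Dataproc workflow template and triggered on a weekly schedule, so no cluster exists between runs.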