Your team is working on a big data project, and you have been tasked with setting up a Hadoop/Spark cluster for data processing using Google Cloud Dataproc. The data scientists need to be able to submit their jobs, and the cluster should scale down when not in use to minimize costs. How would you configure this setup?
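
One workable configuration: attach an autoscaling policy to the cluster (so primary and cheaper secondary workers grow and shrink with YARN demand), set a lifecycle idle-delete TTL (so the whole cluster is torn down after prolonged inactivity), and have the data scientists submit work through the Dataproc Jobs API rather than SSHing into nodes. Below is a minimal sketch using the `google-cloud-dataproc` Python client; the project ID, region, policy and cluster names, machine types, instance counts, and GCS path are illustrative assumptions, not values from the scenario.

```python
# Sketch: autoscaled Dataproc cluster with idle auto-delete, plus a job submission.
# pip install google-cloud-dataproc
from google.cloud import dataproc_v1
from google.protobuf import duration_pb2

PROJECT = "my-project"   # assumed project ID
REGION = "us-central1"   # assumed region
ENDPOINT = {"api_endpoint": f"{REGION}-dataproc.googleapis.com:443"}

# 1. Create an autoscaling policy driven by pending YARN memory.
policy_client = dataproc_v1.AutoscalingPolicyServiceClient(client_options=ENDPOINT)
policy = policy_client.create_autoscaling_policy(
    request={
        "parent": f"projects/{PROJECT}/regions/{REGION}",
        "policy": {
            "id": "spark-autoscale",  # assumed policy name
            "basic_algorithm": {
                "yarn_config": {
                    # Wait for running containers before removing a node.
                    "graceful_decommission_timeout": duration_pb2.Duration(seconds=1800),
                    "scale_up_factor": 1.0,    # add capacity aggressively
                    "scale_down_factor": 1.0,  # release capacity when idle
                }
            },
            "worker_config": {"min_instances": 2, "max_instances": 10},
            # Secondary (preemptible/spot) workers absorb burst load cheaply.
            "secondary_worker_config": {"min_instances": 0, "max_instances": 20},
        },
    }
)

# 2. Create the cluster with the policy attached and an idle-delete TTL.
cluster_client = dataproc_v1.ClusterControllerClient(client_options=ENDPOINT)
cluster_client.create_cluster(
    request={
        "project_id": PROJECT,
        "region": REGION,
        "cluster": {
            "project_id": PROJECT,
            "cluster_name": "analytics-cluster",  # assumed cluster name
            "config": {
                "master_config": {"num_instances": 1, "machine_type_uri": "n1-standard-4"},
                "worker_config": {"num_instances": 2, "machine_type_uri": "n1-standard-4"},
                "autoscaling_config": {"policy_uri": policy.name},
                "lifecycle_config": {
                    # Delete the cluster entirely after 2 idle hours.
                    "idle_delete_ttl": duration_pb2.Duration(seconds=2 * 3600)
                },
            },
        },
    }
).result()  # block until the cluster is ready

# 3. Data scientists submit Spark jobs against the cluster by name.
job_client = dataproc_v1.JobControllerClient(client_options=ENDPOINT)
job_client.submit_job(
    request={
        "project_id": PROJECT,
        "region": REGION,
        "job": {
            "placement": {"cluster_name": "analytics-cluster"},
            "pyspark_job": {"main_python_file_uri": "gs://my-bucket/etl.py"},  # assumed path
        },
    }
)
```

The two cost controls are complementary: autoscaling shrinks a busy-then-quiet cluster down to its minimum workers, while `idle_delete_ttl` removes the cluster altogether once no jobs have run for the TTL. If recreating clusters per workload is acceptable, ephemeral per-job clusters (e.g., via Dataproc Workflow Templates) are a common alternative to a long-lived autoscaled cluster.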