You are planning to migrate an Apache Spark 3 batch job from your on-premises environment to Google Cloud. You want the job to read data from Cloud Storage and write the results to BigQuery with minimal code changes. The job has been optimized for Spark and is designed to run with executors configured with 8 vCPUs and 16 GB of memory. You also want to retain control over these resource specifications while reducing the operational overhead of setup and management. What is the most suitable option?
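A managed service that fits these constraints is Dataproc Serverless for Spark: it requires no cluster setup, includes the spark-bigquery connector, and still accepts standard Spark properties, so the tuned executor shape can be carried over unchanged. The command below is a sketch only; the region, bucket, and script names are hypothetical placeholders.

```shell
# Submit the existing PySpark batch job to Dataproc Serverless.
# The --properties flag passes through the job's tuned executor
# specification (8 vCPUs and 16 GB of memory per executor).
gcloud dataproc batches submit pyspark gs://example-bucket/jobs/batch_job.py \
    --region=us-central1 \
    --properties="spark.executor.cores=8,spark.executor.memory=16g"
```

Because these are ordinary Spark properties rather than service-specific settings, the same configuration the job used on-premises applies here, keeping code and tuning changes minimal.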