You are building a TensorFlow text-to-image generative model using a dataset containing billions of images and captions. You want to establish a low-maintenance, automated workflow that encompasses data ingestion, statistics collection, dataset splitting, data transformations, model training, validation, and testing. What should you do?
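The steps named in the scenario (ingestion, statistics collection, splitting, transformation, training, and evaluation) map onto the components of an automated TFX pipeline, so a minimal sketch of such a pipeline is shown below for context. All paths, module files, split ratios, and step counts are placeholder assumptions, not part of the question itself, and the evaluation step is shown only schematically.

```python
# Minimal TFX pipeline sketch covering the workflow stages in the question.
# Bucket paths, module files, and step counts are hypothetical placeholders.
from tfx import v1 as tfx

DATA_ROOT = "gs://my-bucket/tfrecords"          # hypothetical TFRecord location
PIPELINE_ROOT = "gs://my-bucket/pipeline-root"  # hypothetical artifact store
METADATA_PATH = "metadata/metadata.db"          # hypothetical ML Metadata store


def create_pipeline() -> tfx.dsl.Pipeline:
    # Data ingestion plus dataset splitting into train/eval sets.
    example_gen = tfx.components.ImportExampleGen(
        input_base=DATA_ROOT,
        output_config=tfx.proto.Output(
            split_config=tfx.proto.SplitConfig(splits=[
                tfx.proto.SplitConfig.Split(name="train", hash_buckets=9),
                tfx.proto.SplitConfig.Split(name="eval", hash_buckets=1),
            ])))

    # Statistics collection and schema inference over the ingested examples.
    statistics_gen = tfx.components.StatisticsGen(
        examples=example_gen.outputs["examples"])
    schema_gen = tfx.components.SchemaGen(
        statistics=statistics_gen.outputs["statistics"])

    # Data transformations defined in a user-provided preprocessing module.
    transform = tfx.components.Transform(
        examples=example_gen.outputs["examples"],
        schema=schema_gen.outputs["schema"],
        module_file="preprocessing.py")  # hypothetical module file

    # Model training on the transformed examples.
    trainer = tfx.components.Trainer(
        module_file="trainer.py",  # hypothetical module file
        examples=transform.outputs["transformed_examples"],
        transform_graph=transform.outputs["transform_graph"],
        schema=schema_gen.outputs["schema"],
        train_args=tfx.proto.TrainArgs(num_steps=10000),
        eval_args=tfx.proto.EvalArgs(num_steps=500))

    # Validation/testing of the trained model on the held-out split.
    evaluator = tfx.components.Evaluator(
        examples=example_gen.outputs["examples"],
        model=trainer.outputs["model"])

    return tfx.dsl.Pipeline(
        pipeline_name="text_to_image_pipeline",
        pipeline_root=PIPELINE_ROOT,
        components=[example_gen, statistics_gen, schema_gen,
                    transform, trainer, evaluator],
        metadata_connection_config=(
            tfx.orchestration.metadata.sqlite_metadata_connection_config(
                METADATA_PATH)))


if __name__ == "__main__":
    # Local run for illustration; a managed orchestrator would run this
    # pipeline on a schedule for a low-maintenance, automated workflow.
    tfx.orchestration.LocalDagRunner().run(create_pipeline())
```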