You are an ML engineer at a startup building a recommendation engine for an e-commerce platform. Training jobs on large datasets are sporadic but compute-intensive, while the inference endpoint must handle variable traffic throughout the day. The company is cost-conscious and requires a solution that balances cost efficiency, scalability, and performance. Which resource allocation approach is MOST SUITABLE for training and inference, and why?
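One option a scenario like this typically weighs is spot/preemptible capacity for the sporadic, interruptible training jobs combined with an auto-scaling managed endpoint sized to follow inference traffic rather than provisioned for peak load. The sketch below is illustrative only and assumes AWS SageMaker; the IAM role ARN, S3 URIs, container image, and the recsys-endpoint name are placeholders, not details taken from the question.

    """Illustrative sketch: managed spot training plus a target-tracking
    auto-scaling policy on the inference endpoint (assumes AWS SageMaker)."""
    import boto3
    from sagemaker.estimator import Estimator

    # Training: spot instances discount compute for sporadic jobs;
    # checkpointing lets an interrupted job resume instead of restarting.
    estimator = Estimator(
        image_uri="<training-image-uri>",                      # placeholder
        role="arn:aws:iam::123456789012:role/SageMakerRole",   # placeholder
        instance_count=1,
        instance_type="ml.p3.2xlarge",
        use_spot_instances=True,   # use spare capacity at reduced cost
        max_run=3600,              # cap on actual training time (seconds)
        max_wait=7200,             # cap on training time plus spot waiting
        checkpoint_s3_uri="s3://my-bucket/checkpoints/",        # placeholder
        output_path="s3://my-bucket/model-artifacts/",          # placeholder
    )
    # estimator.fit({"train": "s3://my-bucket/training-data/"})

    # Inference: scale the endpoint's instance count with request volume
    # instead of provisioning for the daily peak.
    autoscaling = boto3.client("application-autoscaling")
    resource_id = "endpoint/recsys-endpoint/variant/AllTraffic"  # placeholder

    autoscaling.register_scalable_target(
        ServiceNamespace="sagemaker",
        ResourceId=resource_id,
        ScalableDimension="sagemaker:variant:DesiredInstanceCount",
        MinCapacity=1,
        MaxCapacity=4,
    )
    autoscaling.put_scaling_policy(
        PolicyName="recsys-target-tracking",
        ServiceNamespace="sagemaker",
        ResourceId=resource_id,
        ScalableDimension="sagemaker:variant:DesiredInstanceCount",
        PolicyType="TargetTrackingScaling",
        TargetTrackingScalingPolicyConfiguration={
            "TargetValue": 70.0,   # target invocations per instance
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "SageMakerVariantInvocationsPerInstance"
            },
        },
    )

The instance types, capacity bounds, and target value shown are arbitrary; the point of the sketch is the pattern of paying for training compute only when a job runs and letting the inference fleet grow and shrink with traffic.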