A rapidly scaling AI startup is training very large foundation models. It requires access to massive amounts of specialized compute hardware, such as TPUs and GPUs, along with high-speed networking and storage, all managed and maintained by a cloud provider. Which layer of the generative AI landscape primarily provides these fundamental compute resources?