You've recently deployed a model to a Vertex AI endpoint (GCP ML Engineer Pro video)

Duration: 1h 46m 27s · English


Full Certification Question

You've recently deployed a model to a Vertex AI endpoint and configured online serving in Vertex AI Feature Store. As part of your setup, you've scheduled a daily batch ingestion job to update your feature store. However, during these batch ingestion processes, you notice high CPU utilization in your feature store's online serving nodes, leading to increased feature retrieval latency. To enhance online serving performance during these daily batch ingestion tasks, what should you do?
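For context, here is a minimal sketch of the setup the question describes, written with the Python google-cloud-aiplatform SDK. All names (the project, region, featurestore, entity type, features, and the BigQuery source table) are illustrative assumptions, not details given in the question.

```python
from google.cloud import aiplatform

# Hypothetical project and region.
aiplatform.init(project="my-project", location="us-central1")

# Featurestore with a fixed number of online serving nodes; these are the
# nodes whose CPU utilization spikes during batch ingestion in the scenario.
fs = aiplatform.Featurestore.create(
    featurestore_id="my_featurestore",
    online_store_fixed_node_count=1,
)

# Entity type and a couple of example features served online to the endpoint.
entity_type = fs.create_entity_type(entity_type_id="customer")
entity_type.create_feature(feature_id="lifetime_value", value_type="DOUBLE")
entity_type.create_feature(feature_id="churn_score", value_type="DOUBLE")

# The daily batch ingestion job (e.g. triggered on a schedule). Ingestion
# writes into the online store, so it competes with online feature retrieval
# for the serving nodes' CPU, which is the latency problem described above.
entity_type.ingest_from_bq(
    feature_ids=["lifetime_value", "churn_score"],
    feature_time="feature_timestamp",  # timestamp column in the source table
    bq_source_uri="bq://my-project.my_dataset.daily_features",
    entity_id_field="customer_id",
    worker_count=10,
    sync=True,
)
```

This sketch only reproduces the scenario as stated; it is not intended to indicate the correct answer to the question.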