An enterprise AI team is building a machine learning platform on AWS to support multiple departments. The platform includes pre-trained models hosted on Amazon SageMaker, training pipelines, and data stored in Amazon S3. To comply with organizational security policies, the team must implement granular access control such that:

- Data scientists can invoke inference endpoints but cannot modify model artifacts.
- ML engineers can retrain models and update endpoints.
- Analysts can only view model performance metrics and logs without accessing underlying training data.

As an AI practitioner, which AWS feature or mechanism is most appropriate to enforce resource-level and action-specific permissions across this distributed ML environment?