A retail company is building a web-based AI application on Amazon SageMaker to predict customer purchase behavior. The system must support the full ML lifecycle, including experimentation, training, a centralized model registry, deployment, and monitoring. The training data is stored securely in Amazon S3, and the models are deployed to real-time endpoints to serve predictions. The company now plans to run an on-demand workflow that monitors the deployed models for bias drift, to ensure fair and accurate predictions. What do you recommend?
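For context, an on-demand bias check of this kind is typically expressed with Amazon SageMaker Clarify, which runs a one-off processing job against the model rather than a recurring schedule. The sketch below shows roughly how such a job could be configured with the SageMaker Python SDK; the role ARN, S3 paths, model name, column names, and the `gender` facet are all hypothetical placeholders, and the exact bias metrics chosen would depend on the company's fairness requirements.

```python
# Sketch only: assumes valid AWS credentials, an existing SageMaker model,
# and placeholder S3 paths/role ARN -- none of these values come from the question.
from sagemaker import Session, clarify

session = Session()
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # hypothetical

# Processor that runs the on-demand Clarify bias analysis job.
clarify_processor = clarify.SageMakerClarifyProcessor(
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    sagemaker_session=session,
)

# Where the labeled dataset lives and where results should be written (placeholders).
data_config = clarify.DataConfig(
    s3_data_input_path="s3://example-bucket/purchase-data/train.csv",
    s3_output_path="s3://example-bucket/clarify-bias-output/",
    label="purchased",
    headers=["age", "gender", "income", "purchased"],
    dataset_type="text/csv",
)

# The sensitive attribute (facet) and the favorable label to audit for bias.
bias_config = clarify.BiasConfig(
    label_values_or_threshold=[1],
    facet_name="gender",
    facet_values_or_threshold=[0],
)

# The deployed model whose predictions are analyzed for post-training bias.
model_config = clarify.ModelConfig(
    model_name="purchase-predictor",  # hypothetical model name
    instance_type="ml.m5.xlarge",
    instance_count=1,
    accept_type="text/csv",
)
predictions_config = clarify.ModelPredictedLabelConfig(probability_threshold=0.5)

# Kick off the on-demand post-training bias job; results land in s3_output_path.
clarify_processor.run_post_training_bias(
    data_config=data_config,
    data_bias_config=bias_config,
    model_config=model_config,
    model_predicted_label_config=predictions_config,
)
```

For recurring rather than on-demand checks, the same analysis can instead be attached to a schedule with SageMaker Model Monitor's bias-drift monitoring, which compares live endpoint traffic against a baseline.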