You are a data scientist at a credit risk management company building a machine learning model to predict loan defaults. To ensure transparency and regulatory compliance, you need to explain how the model makes its predictions, particularly for high-stakes decisions such as loan approvals or rejections. The company wants a detailed understanding of how individual features influence the model's predictions for specific customers, as well as an overall view of how features affect the model's predictions across the entire dataset. Which of the following explanations BEST describes the differences between Shapley values and Partial Dependence Plots (PDPs) in the context of model explainability, and how you might use them for this purpose?
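
For context (not part of the answer choices), the sketch below illustrates the local-versus-global distinction the question is probing. It assumes a synthetic loan dataset with hypothetical feature names (`income`, `debt_to_income`, `credit_score`), the `shap` package for per-customer Shapley attributions, and scikit-learn's `PartialDependenceDisplay` for a dataset-wide PDP; none of these names come from the question itself.

```python
# Minimal sketch: Shapley values for a single customer (local) vs. a PDP (global),
# using assumed synthetic data and hypothetical feature names.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import PartialDependenceDisplay

# Hypothetical loan-default training data (synthetic, for illustration only)
rng = np.random.default_rng(42)
X = pd.DataFrame({
    "income": rng.normal(60_000, 15_000, 1_000),
    "debt_to_income": rng.uniform(0.05, 0.6, 1_000),
    "credit_score": rng.integers(500, 850, 1_000),
})
y = (2 * X["debt_to_income"]
     - X["credit_score"] / 850
     + rng.normal(0, 0.2, 1_000) > 0).astype(int)

model = GradientBoostingClassifier().fit(X, y)

# Local explanation: Shapley values attribute one customer's predicted default
# risk to each individual feature.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[[0]])  # one row -> one attribution per feature
print("Per-feature contributions for customer 0:", shap_values)

# Global explanation: a PDP shows the average effect of credit_score on the
# model's predictions across the entire dataset.
PartialDependenceDisplay.from_estimator(model, X, features=["credit_score"])
```

In short, the Shapley values answer "why did the model score this specific customer this way?", while the PDP answers "how does this feature affect predictions on average across all customers?", which is the contrast the correct answer choice should capture.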