You are developing an AI-based recruiting system and are using GitHub Copilot to help write code that filters job applicants based on their qualifications. Given that Copilot's training data might reflect historical biases (e.g., gender or racial bias), how can you ensure that the code it generates does not inadvertently introduce bias into the system? A sketch of one possible safeguard follows below.
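
One approach is to never accept Copilot-generated filtering logic at face value and instead wrap it in an automated fairness audit. The sketch below is a minimal, hypothetical example (not Copilot output and not a complete solution): it assumes an applicant table with illustrative column names (`years_experience`, `gender`) and checks the filter's selection rates against the four-fifths (80%) disparate-impact rule, with the protected attribute used only for auditing, never as a filtering feature.

```python
# Minimal sketch: audit a Copilot-style applicant filter for disparate impact.
# Column names, the experience threshold, and the sample data are illustrative
# assumptions, not part of any real system.

import pandas as pd


def filter_applicants(df: pd.DataFrame) -> pd.Series:
    """Example filter: keep applicants meeting a minimum experience bar.
    Deliberately uses only job-relevant fields, never protected attributes."""
    return df["years_experience"] >= 3


def disparate_impact_ratio(df: pd.DataFrame, passed: pd.Series,
                           protected_col: str) -> float:
    """Ratio of the lowest group's selection rate to the highest group's.
    Values below 0.8 are a common red flag (four-fifths rule)."""
    rates = passed.groupby(df[protected_col]).mean()
    return rates.min() / rates.max()


if __name__ == "__main__":
    applicants = pd.DataFrame({
        "years_experience": [5, 1, 4, 2, 6, 3],
        "gender": ["F", "F", "M", "M", "F", "M"],  # used only for auditing
    })
    passed = filter_applicants(applicants)
    ratio = disparate_impact_ratio(applicants, passed, "gender")
    print(f"Disparate impact ratio: {ratio:.2f}")
    assert ratio >= 0.8, "Filter fails the four-fifths check; review the logic."
```

Running such a check in CI means any Copilot-suggested change to the filtering logic is automatically re-audited before it ships; the specific threshold and protected attributes would need to be chosen per jurisdiction and role.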