A large e-commerce company uses a language model (LLM) to assist its customer service agents by generating responses to customer queries. However, the company is concerned about prompt engineering attacks, where malicious users craft inputs to manipulate the LLM into producing incorrect or harmful responses. What is the best approach to mitigate this issue?