ai-practitioner video for a financial services company has deployed a customer support chatbot powered by a generative AI model to handle queries about account
A financial services company has deployed a customer support chatbot powered by a generative AI model to handle queries about account balances, loan options, and transaction history. However, users have started experimenting with prompt injection attacks, where they craft inputs designed to manipulate the model's behavior for example, trying to override system instructions or produce misleading financial advice. To protect the chatbot from such manipulation and ensure reliable, safe responses, the development team wants to implement the best mitigation strategy against prompt injection. Which approach is most effective in this scenario?