A company wants to use a large language model (LLM) on Amazon Bedrock for sentiment analysis and requires consistent responses to the same prompt. What inference parameter adjustment should the company make?
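The adjustment is to lower the temperature, setting it to 0, so that token selection is deterministic and the same prompt reliably yields the same sentiment label. Below is a minimal sketch of passing that parameter through the Bedrock runtime API, assuming the boto3 `bedrock-runtime` client and an Anthropic Claude model ID; both are illustrative choices, not specified in the question.

```python
import json
import boto3

# Illustrative region and model ID; substitute whatever the company actually uses.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

body = json.dumps({
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 200,
    "temperature": 0,  # deterministic sampling: always pick the highest-probability token
    "messages": [
        {
            "role": "user",
            "content": "Classify the sentiment of: 'The service was excellent.'",
        }
    ],
})

response = bedrock.invoke_model(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # example model ID
    body=body,
)

# Parse the streamed response body and print the model's sentiment classification.
print(json.loads(response["body"].read())["content"][0]["text"])
```

With `temperature` at 0 (and, optionally, a restrictive `top_k`/`top_p`), repeated invocations of the same prompt return consistent output, which is what a sentiment-analysis workload that needs reproducible labels requires.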