A company is using Amazon Bedrock and wants to set an upper limit on the number of tokens returned in the model's response. Which of the following inference parameters would you recommend for this use case?
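The parameter that caps response length in Amazon Bedrock is the maximum-tokens inference parameter (exposed as `maxTokens` in the Converse API's `inferenceConfig`). The sketch below, which builds a request payload without actually calling AWS, shows where that limit is set; the model ID and prompt are illustrative assumptions, not part of the original question.

```python
import json

def build_converse_request(prompt: str, max_tokens: int = 256) -> dict:
    """Build a Bedrock Converse API request that caps the response length.

    The maxTokens field in inferenceConfig is the upper limit on the
    number of tokens the model may generate in its response.
    """
    return {
        # Illustrative model ID; substitute the model your account uses.
        "modelId": "anthropic.claude-3-haiku-20240307-v1:0",
        "messages": [
            {"role": "user", "content": [{"text": prompt}]}
        ],
        "inferenceConfig": {
            "maxTokens": max_tokens,  # upper limit on response tokens
            "temperature": 0.5,
            "topP": 0.9,
        },
    }

# A boto3 "bedrock-runtime" client would pass these fields to
# client.converse(**request); here we only inspect the payload.
request = build_converse_request(
    "Summarize Amazon Bedrock in one paragraph.", max_tokens=128
)
print(json.dumps(request["inferenceConfig"], indent=2))
```

Other inference parameters such as `temperature` and `topP` shape the randomness of the output but do not bound its length; only the maximum-tokens setting enforces a hard cap.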