Video duration: 1:46:27 · Language: English

A journalist uses a foundation model to help draft articles (GCP Generative AI Leader)


{ "query": "A foundation model generates plausible details that are not supported by the source text. Which limitation does this demonstrate?", "options": [ { "text": "Overfitting", "explanation": "The model memorizes training data and fails to generalize to new inputs", "correct": false, "selected": false }, { "text": "Hallucination", "explanation": "The model generates fluent content that is not grounded in the input or facts", "correct": true, "selected": false }, { "text": "Knowledge cutoff", "explanation": "The model lacks awareness of facts that occurred after its training data period", "correct": false, "selected": false } ], "answer": "

The correct option is Hallucination.

When a foundation model produces convincing details that are not present in the supplied source text, it is fabricating unsupported content. This behavior is a failure to ground the response in the provided context and is the hallmark of hallucination.
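As an aside, grounding is usually approached from both sides of the call: constrain the prompt to the supplied source, then check the draft against that source. The Python sketch below is a simplified, hypothetical illustration of both ideas; the prompt wording, the example novel, and the lexical-overlap threshold are assumptions, not part of the exam material or of any specific Google API.

```python
import re


def build_grounded_prompt(source_text: str, question: str) -> str:
    """Constrain the model to the supplied source text (instruction wording is an assumption)."""
    return (
        "Answer using ONLY the source text below. If the answer is not in the "
        "source, reply exactly: 'Not stated in the source.'\n\n"
        f"SOURCE:\n{source_text}\n\nQUESTION:\n{question}"
    )


def unsupported_sentences(draft: str, source_text: str, min_overlap: float = 0.5) -> list[str]:
    """Naive lexical check: flag draft sentences whose content words rarely appear in the source."""
    source_words = set(re.findall(r"[a-z]+", source_text.lower()))
    flagged = []
    for sentence in re.split(r"(?<=[.!?])\s+", draft.strip()):
        words = [w for w in re.findall(r"[a-z]+", sentence.lower()) if len(w) > 3]
        if words and sum(w in source_words for w in words) / len(words) < min_overlap:
            flagged.append(sentence)
    return flagged


if __name__ == "__main__":
    # Hypothetical novel and hallucinated draft, mirroring the journalist scenario.
    source = "The novel follows Mara, a cartographer who maps a drowned city."
    draft = ("Mara is a cartographer who maps a drowned city. "
             "She later marries the city's exiled prince and founds a university.")
    print(build_grounded_prompt(source, "What does Mara do in the novel?"))
    print(unsupported_sentences(draft, source))  # flags the fluent but ungrounded second sentence
```

A production system would replace the lexical check with retrieval-backed grounding, such as the Vertex AI grounding features linked in the references below.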

Overfitting describes a model that memorizes its training data and then performs poorly on new or unseen data. The scenario in the question is about inventing details beyond the given context, not about poor generalization from training data to test data.
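For contrast, a quick way to see overfitting as a train-versus-test gap is to fit an unconstrained model on noisy data. The scikit-learn sketch below is an illustrative assumption (arbitrary synthetic dataset and model), not something drawn from the exam:

```python
# An unconstrained decision tree memorizes noisy training data: near-perfect training
# accuracy, noticeably lower held-out accuracy.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=20, n_informative=5,
                           flip_y=0.2, random_state=0)  # flip_y adds label noise worth memorizing
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

tree = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)  # no depth limit
print("train accuracy:", tree.score(X_train, y_train))  # close to 1.0 (memorized)
print("test accuracy:", tree.score(X_test, y_test))     # noticeably lower (poor generalization)
```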

Knowledge cutoff refers to a model not knowing facts that occurred after its training data ended. The prompt here provides the necessary context, yet the model adds details that are not in that context, which is different from the model lacking up-to-date knowledge.

", "batch_id": "99", "answerCode": "2", "type": "multiple-choice", "originalQuery": "A journalist uses a foundation model to help draft articles. On one occasion, when asked to write about a fictional character from a novel, the AI generates a detailed biography, including specific events and relationships that were never mentioned in the actual book. The generated text is fluent and plausible but factually incorrect with respect to the source material. What common limitation of foundation models does this primarily illustrate?", "originalOptions": "A. Knowledge Cutoff
B. Edge Cases
C. Hallucination
D. Bias", "domain": "Techniques to improve gen AI model output", "hasImage": false, "queryImage": "", "queryImages": [], "deprecatedReference": false, "deprecatedMatches": {}, "qid": "37s", "tip": "

Tip: Look for cues like "not in the provided source" or "unsupported by the context", which point to hallucination. Think of overfitting as a train-versus-test performance gap, and of knowledge cutoff as missing information after a certain date.

", "references": [ "https://cloud.google.com/architecture/ai-ml/glossary#hallucination", "https://cloud.google.com/vertex-ai/generative-ai/docs/multimodal/grounding", "https://developers.google.com/machine-learning/glossary#overfitting", "https://ai.google.dev/gemini-api/faq" ], "video_url": "https://certificationation.com/videos/gcp/generative-ai-leader/gcp-model-to-help-draft-articles-exam-037.html", "url": "https://certificationation.com/questions/gcp/generative-ai-leader/gcp-model-to-help-draft-articles-exam-037.html" }