You have a chatbot that uses the Azure OpenAI GPT-3.5 large language model (LLM) to answer technical questions. Which two statements accurately describe the chatbot? Each correct answer presents a complete solution.
Note: Each correct selection is worth one point.
Correct answers: A, C
A. Grounding data can be used to constrain the output of the chatbot.
C. The chatbot might respond with inaccurate data.
According to the Microsoft Azure AI Fundamentals (AI-900) study material and the Microsoft Learn modules on Azure OpenAI, a chatbot built on Azure OpenAI GPT-3.5 is powered by a large language model (LLM) that generates natural-language responses. These models work by reproducing statistical patterns learned from massive text datasets; they do not inherently guarantee factual accuracy. As a result, GPT-based models can produce highly coherent text that is nonetheless inaccurate, outdated, or fabricated (commonly referred to as "hallucinations"). This makes C correct.
Grounding data, as described in Microsoft's Responsible AI and Azure OpenAI grounding documentation, refers to injecting trusted external sources (company documents, databases, or knowledge bases) into the prompt context. This keeps the model aligned with factual or domain-specific content, effectively constraining its output to what is relevant and verifiable. Therefore, A is also correct.
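To make the grounding idea concrete, here is a minimal sketch using the Azure OpenAI Python SDK (the `openai` v1.x package). The endpoint, API key, deployment name, and the context snippet are hypothetical placeholders, not values taken from the question; the point is only that grounding data is passed in the prompt context and the system message restricts the model to it.

```python
# Minimal grounding sketch for Azure OpenAI (openai>=1.0).
# All endpoint/key/deployment values below are placeholders.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",  # placeholder
    api_key="YOUR-API-KEY",                                   # placeholder
    api_version="2024-02-01",
)

# Trusted, domain-specific text retrieved from your own documents
# (hypothetical example content).
grounding_data = (
    "Contoso VPN clients must use TLS 1.2 or later. "
    "Legacy SSL connections were disabled in March 2023."
)

response = client.chat.completions.create(
    model="gpt-35-turbo",  # your GPT-3.5 deployment name (placeholder)
    messages=[
        {
            "role": "system",
            # Constrain the model to answer only from the grounding data.
            "content": (
                "Answer only from the context below. If the answer is not "
                "in the context, say you don't know.\n\nContext:\n"
                + grounding_data
            ),
        },
        {"role": "user", "content": "Which TLS versions does the VPN support?"},
    ],
)
print(response.choices[0].message.content)
```

Because the system message tells the model to answer only from the supplied context, responses stay aligned with the trusted source rather than with whatever the base model learned during pretraining, which is exactly what option A describes.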
Options B and D are incorrect: GPT models do not always provide accurate information, and Azure OpenAI models are not certified for high-risk use cases such as medical diagnosis. Microsoft's Responsible AI guidance warns against relying on generative models for such high-stakes decisions without human oversight and validation.
Thus, the verified answers are A and C.