Correct answer: C
In Conversational Language Understanding (CLU), a core service within Azure AI Language, intents represent the goals or purposes behind user utterances (for example, "Track my order" or "Cancel my subscription"). In real-world scenarios, however, users often provide input that does not match any defined intent. To handle such cases gracefully, Microsoft recommends including a "None" intent that captures out-of-scope utterances: text that doesn't belong to any other intent in your model.
According to the Microsoft Learn module "Build a Conversational Language Understanding app", the None intent serves as a catch-all or fallback category for utterances that the model should ignore or answer with a default message (for example, "I'm sorry, I don't understand that."). Training the model with multiple examples of irrelevant or unrelated utterances under this intent improves its ability to distinguish valid from invalid user input.
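As a rough illustration of this fallback behavior, the sketch below (Python, using the azure-ai-language-conversations SDK) sends an utterance to a deployed CLU project and returns the default message whenever the top-scoring intent is None. The endpoint, key, project name, and deployment name are placeholders for your own resource, not values from the question.

```python
# pip install azure-ai-language-conversations
from azure.ai.language.conversations import ConversationAnalysisClient
from azure.core.credentials import AzureKeyCredential

# Placeholder endpoint and key for your own Azure AI Language resource.
client = ConversationAnalysisClient(
    "https://<your-language-resource>.cognitiveservices.azure.com",
    AzureKeyCredential("<your-key>"),
)

def reply_to(user_text: str) -> str:
    result = client.analyze_conversation(
        task={
            "kind": "Conversation",
            "analysisInput": {
                "conversationItem": {
                    "id": "1",
                    "participantId": "user",
                    "text": user_text,
                }
            },
            "parameters": {
                "projectName": "<your-clu-project>",
                "deploymentName": "<your-deployment>",
            },
        }
    )
    prediction = result["result"]["prediction"]
    top_intent = prediction["topIntent"]

    # Out-of-scope utterances should land in the None intent,
    # so answer them with a default message instead of a wrong intent.
    if top_intent == "None":
        return "I'm sorry, I don't understand that."
    return f"Handling intent: {top_intent}"
```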
The other options are incorrect:
* A. Export the model: Exporting only saves or transfers the model; it does not influence how the model detects irrelevant utterances.
* B. Create a new model: A new model would not inherently solve out-of-scope detection unless properly trained with a None intent.
* D. Create a prebuilt task entity: Entities identify specific data (such as dates or products) within utterances; they do not help the model detect irrelevant input.
Thus, the correct approach to ensure that your CLU model can detect utterances outside its intended scope is to add examples of unrelated or off-topic phrases to the None intent, as sketched below. This improves classification accuracy and reduces incorrect intent matches.
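For reference, here is a minimal sketch of what "adding utterances to the None intent" can look like in a project's training data. The structure is shown as a Python dict and only approximates the JSON that Language Studio exports and imports; treat the exact field names as an assumption and verify against a project exported from your own resource.

```python
# Approximate shape of a CLU project's labeled assets (field names are assumptions;
# confirm against a project exported from Language Studio before importing).
none_intent_assets = {
    "intents": [
        {"category": "None"},
        {"category": "TrackOrder"},
    ],
    "utterances": [
        # Off-topic examples labeled with the None intent teach the model
        # what "out of scope" looks like for this project.
        {"text": "What's the weather like today?", "intent": "None", "language": "en-us"},
        {"text": "Tell me a joke", "intent": "None", "language": "en-us"},
        # A valid, in-scope example for contrast.
        {"text": "Track my order", "intent": "TrackOrder", "language": "en-us"},
    ],
}
```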
# Correct answer: C. Add utterances to the None intent