Correct answer: D
According to the Microsoft Azure AI Fundamentals (AI-900) official study guide and the Microsoft Learn module "Describe features of machine learning on Azure", a classification model outputs a probability score representing how likely it is that each input belongs to a particular class. To decide whether a prediction is "positive" or "negative," the model applies a threshold (often defaulted to 0.5). Adjusting this threshold directly affects the balance between false positives and false negatives.
* A false positive occurs when the model incorrectly predicts a positive outcome (for example, predicting that a patient has a disease when they do not).
* A false negative occurs when the model fails to predict a true positive (for example, predicting that a patient does not have a disease when they actually do).
To reduce false positives, you must make the model less likely to classify borderline cases as positive. This is done by increasing the decision threshold, thereby favoring false negatives (because the model will only classify a case as positive when the prediction confidence is very high). In other words, by moving the threshold upward, you tighten the model's standard for what qualifies as a "positive" prediction, reducing incorrect positives.
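This trade-off can be sketched with a small, Azure-agnostic example. The probability scores and labels below are hypothetical, chosen only to show that raising the threshold converts a false positive into a false negative:

```python
# Hypothetical predicted probabilities and ground-truth labels
# (1 = actually positive, 0 = actually negative).
probs = [0.30, 0.45, 0.55, 0.60, 0.80, 0.95]
truth = [0,    0,    0,    1,    1,    1]

def confusion_counts(threshold):
    """Count false positives and false negatives at a given threshold."""
    fp = sum(1 for p, t in zip(probs, truth) if p >= threshold and t == 0)
    fn = sum(1 for p, t in zip(probs, truth) if p < threshold and t == 1)
    return fp, fn

print(confusion_counts(0.5))  # default threshold -> (1, 0): one false positive
print(confusion_counts(0.7))  # raised threshold  -> (0, 1): one false negative
```

Raising the threshold from 0.5 to 0.7 means the borderline negative case (score 0.55) is no longer misclassified as positive, but the weakly positive case (score 0.60) now slips through as a false negative, which is exactly the trade the question describes.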
Let's review why other options are incorrect:
* A. Include test data in training data: This contaminates the evaluation set and causes data leakage, which leads to unreliable (overly optimistic) performance metrics.
* B. Increase the number of training iterations: This may improve learning but doesn't specifically target false positives.
* C. Modify the threshold in favor of false positives: That would increase, not reduce, false positives.
Therefore, the correct step to reduce false positives is to adjust the threshold in favor of false negatives, making the model more conservative when labeling a case as positive. Hence, the answer is D.