Correct Answer: A
For evaluating a classification model, the appropriate metric from the options provided is the True Positive Rate (TPR), also known as Sensitivity or Recall. According to the Microsoft Azure AI Fundamentals (AI-900) official study guide and the Microsoft Learn module "Evaluate model performance", classification models are evaluated using metrics that measure how accurately the model predicts categorical outcomes such as "yes/no," "spam/not spam," or "approved/denied."
The True Positive Rate measures the proportion of correctly identified positive cases out of all actual positive cases. Mathematically, it is expressed as:
\[
\text{True Positive Rate (Recall)} = \frac{\text{True Positives}}{\text{True Positives} + \text{False Negatives}}
\]
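The calculation can be illustrated with a minimal Python sketch; the labels below are invented for demonstration and are not part of the exam question:

```python
# Minimal sketch: computing the True Positive Rate (recall) from raw counts.
# The example labels are illustrative only.
y_true = [1, 1, 1, 0, 0, 1, 0, 1]   # actual classes (1 = positive)
y_pred = [1, 0, 1, 0, 1, 1, 0, 1]   # model predictions

true_positives  = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
false_negatives = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)

recall = true_positives / (true_positives + false_negatives)
print(f"True Positive Rate (Recall): {recall:.2f}")   # 4 / (4 + 1) = 0.80
```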
This metric is especially important when missing positive cases carries a high cost, such as in medical diagnosis or fraud detection. Microsoft Learn highlights accuracy, precision, recall, F1 score, and AUC (Area Under the Curve) as the evaluation metrics suited to classification models.
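For context, these classification metrics are available in scikit-learn; the following hedged sketch uses made-up labels and predicted probabilities purely to show how they are computed:

```python
# Illustrative sketch of the classification metrics named above
# (accuracy, precision, recall, F1, AUC) using scikit-learn.
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score)

y_true  = [1, 1, 1, 0, 0, 1, 0, 1]                    # actual classes
y_pred  = [1, 0, 1, 0, 1, 1, 0, 1]                    # predicted classes
y_score = [0.9, 0.4, 0.8, 0.2, 0.7, 0.6, 0.1, 0.95]   # predicted probabilities

print("Accuracy :", accuracy_score(y_true, y_pred))
print("Precision:", precision_score(y_true, y_pred))
print("Recall   :", recall_score(y_true, y_pred))      # the True Positive Rate
print("F1 score :", f1_score(y_true, y_pred))
print("AUC      :", roc_auc_score(y_true, y_score))
```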
The other options, Mean Absolute Error (MAE), Root Mean Squared Error (RMSE), and Coefficient of Determination (R²), are regression metrics used to evaluate models that predict numeric values rather than categories. They apply, for example, to predicting house prices or temperatures, not yes/no decisions.
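To make the contrast concrete, here is a brief sketch of those regression metrics applied to hypothetical house-price predictions (the values are invented, and RMSE is taken as the square root of the mean squared error):

```python
# Illustrative sketch of the regression metrics (MAE, RMSE, R²) for
# numeric predictions such as house prices; all values are made up.
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

y_true = [250_000, 310_000, 180_000, 420_000]   # actual prices
y_pred = [245_000, 320_000, 200_000, 400_000]   # model predictions

mae  = mean_absolute_error(y_true, y_pred)
rmse = mean_squared_error(y_true, y_pred) ** 0.5   # RMSE = sqrt(MSE)
r2   = r2_score(y_true, y_pred)

print(f"MAE: {mae:.0f}  RMSE: {rmse:.0f}  R²: {r2:.3f}")
```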
Therefore, the correct classification evaluation metric among the choices is A. True Positive Rate.
Reference: Microsoft Learn - Evaluate model performance - Understand metrics for classification and regression models