Correct answer: B
The scenario requires two checks on each photo: (1) there is at least one face, and (2) at least one detected face is wearing sunglasses. The Azure AI Face service - Detect operation is purpose-built for this combination. It detects faces and returns per-face attributes, including glasses type, so you can enforce both rules in a single pass. From the official guidance, the Detect API "detects human faces in an image and returns the rectangle coordinates of their locations" and exposes face attributes such as glasses. A concise attribute extract states: "Glasses: NoGlasses, ReadingGlasses, Sunglasses, Swimming Goggles." With this, you can count faces (requirement 1) and then verify that at least one face's glasses attribute equals sunglasses (requirement 2).
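The two-rule check described above can be sketched as a small filter over the Detect response. This is a minimal sketch, not production code: the endpoint path, header name, and trimmed sample response below follow the documented shape of the Face - Detect REST call (`returnFaceAttributes=glasses`), but the `sample` data itself is invented for illustration.

```python
from typing import Any


def photo_passes(faces: list[dict[str, Any]]) -> bool:
    """Return True when the photo satisfies both rules:
    (1) at least one face was detected, and
    (2) at least one detected face has glasses == 'sunglasses'.
    """
    if not faces:
        return False  # rule 1 fails: no faces detected at all
    # Rule 2: scan the per-face attributes returned by Detect.
    return any(
        face.get("faceAttributes", {}).get("glasses") == "sunglasses"
        for face in faces
    )


# The `faces` list is the JSON array returned by the Detect call, e.g.
#   POST {endpoint}/face/v1.0/detect?returnFaceAttributes=glasses
#   Ocp-Apim-Subscription-Key: <your-key>
# A trimmed, hypothetical sample of that response:
sample = [
    {"faceId": "example-id-1", "faceAttributes": {"glasses": "noGlasses"}},
    {"faceId": "example-id-2", "faceAttributes": {"glasses": "sunglasses"}},
]

print(photo_passes(sample))  # True: faces exist and one wears sunglasses
print(photo_passes([]))      # False: no faces, so rule 1 already fails
```

Because the sunglasses signal is a structured attribute bound to each detected face, the filter stays deterministic: there is no caption or image-level tag to parse, just a per-face field to compare.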
By contrast, other options don't align as precisely:
* A. Verify (Face service) checks whether two faces belong to the same person, a one-to-one identity comparison. It returns an identity-match confidence, not content attributes such as sunglasses, so it cannot implement a content filter.
* C. Describe Image (Computer Vision) returns a natural-language caption of the whole image. While a caption might mention "a person wearing sunglasses," it's not guaranteed, is not face-scoped, and offers less deterministic filtering than a structured attribute on a detected face.
* D. Analyze Image (Computer Vision) can return tags such as "person" or sometimes "sunglasses," but those tags are image-level and not bound to specific faces. You need to ensure that a detected face (not just any region) is wearing sunglasses. Face-scoped attributes from Face Detect are more reliable for this logic.
Therefore, the most accurate and exam-aligned choice is B: the Detect operation in the Face service, because it lets you programmatically confirm face presence and per-face sunglasses in a precise, rule-driven workflow.