Correct Answer: B
According to the Microsoft Azure OpenAI Service documentation and AI-900 official study materials, the DALL-E model is specifically designed to generate and edit images from natural language prompts. When a user provides a descriptive text input such as "a futuristic city skyline at sunset", DALL-E interprets the textual prompt and produces an image that visually represents the content described. This functionality is known as text-to-image generation and is one of the creative AI capabilities supported by Azure OpenAI.
DALL-E belongs to the family of generative models that can create new visual content, extend existing images, or apply transformations to images based on textual instructions. Within Azure OpenAI, the DALL-E API enables developers to integrate image creation directly into applications, which is useful for design assistance, marketing content generation, and visualization tools. The model is trained on large datasets of text-image pairs and is optimized for alignment between the prompt and the output, as well as for diversity and accuracy in the produced visuals.
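As a rough sketch of what such an integration might look like, the snippet below calls an Azure OpenAI DALL-E deployment through the `openai` Python library. The environment variable names, the deployment name `dall-e-3`, and the `api_version` string are illustrative assumptions; substitute the values from your own Azure resource.

```python
# Hypothetical sketch: generating an image with Azure OpenAI's DALL-E API.
# Assumes `pip install openai` and that AZURE_OPENAI_API_KEY /
# AZURE_OPENAI_ENDPOINT are set for your own Azure resource.
import os


def build_image_request(prompt: str, size: str = "1024x1024", n: int = 1) -> dict:
    """Assemble the parameters for an image-generation request."""
    return {"prompt": prompt, "size": size, "n": n}


def generate_image(prompt: str) -> str:
    """Send the prompt to a DALL-E deployment and return the image URL."""
    from openai import AzureOpenAI  # imported lazily; requires credentials

    client = AzureOpenAI(
        api_key=os.environ["AZURE_OPENAI_API_KEY"],
        api_version="2024-02-01",  # assumed API version; check your resource
        azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    )
    result = client.images.generate(
        model="dall-e-3",  # the name of your DALL-E deployment
        **build_image_request(prompt),
    )
    return result.data[0].url


if __name__ == "__main__":
    print(generate_image("a futuristic city skyline at sunset"))
```

The returned URL points at the generated image, which an application can then download or display.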
By contrast, the other options serve different purposes:
* A. GPT-4 is a large language model for text-based generation, reasoning, and conversation, not for creating images.
* C. GPT-3 is an earlier text generation model, primarily used for language tasks like summarization, classification, and question answering.
* D. Whisper is an automatic speech recognition (ASR) model used to convert spoken language into written text; it has no image-generation capability.
Therefore, when the requirement is to generate images based on user prompts, the only Azure OpenAI model that fulfills this purpose is DALL-E. This aligns directly with the AI-900 learning objective covering Azure OpenAI generative capabilities for text, code, and image creation.