
Explanation:
Safety system.
According to the Microsoft Learn documentation and the AI-900: Microsoft Azure AI Fundamentals official study guide, the safety system layer in generative AI architecture plays a crucial role in monitoring, filtering, and mitigating harmful or unsafe model outputs. This layer works alongside the model and user experience layers to ensure that generative AI systems, such as those powered by Azure OpenAI, produce responses that are safe, aligned, and responsible.
The safety system layer uses various techniques including content filtering, prompt moderation, and policy enforcement to prevent outputs that could be harmful, biased, misleading, or inappropriate. It evaluates both user inputs (prompts) and model-generated outputs to identify and block unsafe or unethical content. The system might use predefined rules, classifiers, or human feedback signals to decide whether to allow, modify, or stop a response.
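Conceptually, the safety system sits as a thin wrapper around the model: it checks the prompt before the model sees it and checks the response before the user sees it. The following minimal Python sketch illustrates that flow under stated assumptions; the helper names (moderate, call_model, safe_generate) and the toy blocklist are invented for illustration and are not part of any Azure SDK or service.

```python
# Hypothetical sketch of a safety-system layer wrapping a generative model.
# The helpers below are illustrative placeholders, not real Azure APIs.

BLOCKED_TERMS = {"example-harmful-term"}  # stand-in for a real classifier or policy


def moderate(text: str) -> bool:
    """Return True if the text passes the (toy) content policy."""
    lowered = text.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)


def call_model(prompt: str) -> str:
    """Placeholder for a call to the model layer (e.g., an Azure OpenAI deployment)."""
    return f"Model response to: {prompt}"


def safe_generate(prompt: str) -> str:
    # 1. Evaluate the user input (prompt) before it reaches the model.
    if not moderate(prompt):
        return "Your request was blocked by the content policy."
    # 2. Generate a response from the model layer.
    response = call_model(prompt)
    # 3. Evaluate the model output before it reaches the user experience layer.
    if not moderate(response):
        return "The generated response was withheld by the safety system."
    return response


if __name__ == "__main__":
    print(safe_generate("Explain the safety system layer in one sentence."))
```

In a real Azure deployment this role is played by built-in content filters and classifiers rather than a keyword list, but the position of the check, on both the input and the output paths, is the same.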
In contrast, the other layers serve different purposes:
* The model layer contains the core large language or generative model (e.g., GPT or DALL-E) that processes inputs and produces outputs.
* The metaprompt and grounding layer ensures the model's responses are contextually relevant and factually supported, often linking to organizational data sources or system prompts (a brief sketch follows this list).
* The user experience layer defines how users interact with the AI system, including the interface and conversational flow, but does not manage safety enforcement.
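To make the metaprompt and grounding layer concrete, the sketch below shows the common pattern of combining a system message and retrieved organizational data with the user's question before anything is sent to the model layer. The Contoso policy text, the build_grounded_prompt helper, and the message format are assumptions for illustration, not a specific Azure API.

```python
# Hypothetical sketch of the metaprompt-and-grounding step: a system message
# plus retrieved organizational data are combined with the user's question.

SYSTEM_MESSAGE = (
    "You are a support assistant for Contoso. "
    "Answer only from the provided context; if the answer is not there, say so."
)


def build_grounded_prompt(user_question: str, retrieved_context: str) -> list[dict]:
    """Assemble the chat messages that the model layer will receive."""
    return [
        {"role": "system", "content": SYSTEM_MESSAGE},
        {
            "role": "user",
            "content": f"Context:\n{retrieved_context}\n\nQuestion: {user_question}",
        },
    ]


messages = build_grounded_prompt(
    "What is our refund window?",
    "Contoso policy: refunds are accepted within 30 days of purchase.",
)
for message in messages:
    print(message["role"], ":", message["content"])
```

Grounding shapes what the model says; it does not, by itself, block harmful content, which is why it is distinct from the safety system layer.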
Therefore, the layer that uses system inputs and context to mitigate harmful outputs from a generative AI model is the Safety system layer.
This aligns with Microsoft's responsible AI principles (Fairness, Reliability and Safety, Privacy and Security, Inclusiveness, Transparency, and Accountability), ensuring generative AI operates ethically and safely.