
Explanation:

According to the Microsoft Azure AI Fundamentals (AI-900) official study guide and the Microsoft Learn module "Identify features and uses of speech capabilities", speech recognition refers to the process of converting spoken words into written text. When a speaker's voice is transcribed into subtitles during a presentation, the system listens to the audio input, identifies the spoken words, and generates corresponding text in real time. This is precisely what speech recognition technology accomplishes.
Azure provides this functionality through the Azure Speech Service, which supports multiple speech-related features:
* Speech-to-Text (Speech Recognition) - Converts spoken audio into text.
* Text-to-Speech (Speech Synthesis) - Converts written text into spoken audio.
* Speech Translation - Translates spoken words into another language.
In this case, the session is transcribed into subtitles in the same language, not translated or spoken aloud, so the correct feature is Speech Recognition.
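The input/output reasoning above can be sketched as a small decision helper. This is a toy illustration only; `identify_speech_feature` is a hypothetical function for clarity, not part of any Azure SDK:

```python
def identify_speech_feature(input_kind, output_kind, language_changes=False):
    """Map a scenario to the Azure Speech feature it describes (toy logic)."""
    # Spoken audio in, text out: speech-to-text territory
    if input_kind == "audio" and output_kind == "text":
        return "Speech Translation" if language_changes else "Speech Recognition"
    # Written text in, audible speech out: text-to-speech
    if input_kind == "text" and output_kind == "audio":
        return "Speech Synthesis"
    return "Not a speech feature"

# A live session transcribed into same-language subtitles:
print(identify_speech_feature("audio", "text"))  # Speech Recognition
```

Because the subtitles stay in the speaker's language, `language_changes` is false and the scenario resolves to Speech Recognition rather than Speech Translation.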
Let's review the other options:
* Sentiment Analysis: This belongs to the Text Analytics service under natural language processing (NLP) and is used to determine the emotional tone of text, not to convert speech to text.
* Speech Synthesis: Converts text into audible speech (Text-to-Speech), the reverse of what is happening in this scenario.
* Translation: Converts spoken or written words from one language to another. Here, no translation is mentioned; only transcription takes place.
Therefore, the described process of turning live spoken language into readable subtitles is an example of Speech Recognition, a speech-to-text AI capability provided by Azure Cognitive Services.
Final answer: Speech recognition
Reference: Microsoft Learn - Identify speech capabilities of Azure AI services (AI-900 Learning Path)