
Explanation:
The correct answer is facial analysis.
According to the Microsoft Azure AI Fundamentals (AI-900) official study guide and the Microsoft Learn module "Describe features of computer vision workloads on Azure," facial analysis is a computer vision capability that detects faces and extracts attributes such as facial expressions, emotions, pose, occlusion, and image quality factors like exposure and noise. It does not identify or verify individual identities; rather, it interprets facial features and image characteristics to analyze conditions in an image.
In this question, the AI solution helps photographers take better portrait photos by providing feedback on exposure, noise, and occlusion, all of which are facial analysis tasks. The model analyzes the detected face to determine whether the image is well lit, clear, and unobstructed, thereby improving photo quality. These capabilities are part of the Azure Face service in Azure Cognitive Services, which provides both facial detection and facial analysis functionality.
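The study guide itself contains no code, but as an illustration, a minimal sketch of how the azure-cognitiveservices-vision-face client library for Python can request these quality-related attributes might look like the following. The endpoint, key, and image URL are placeholders, and the attribute names assume the default detection_01 model.

```python
# Hedged sketch: request portrait-quality attributes (exposure, noise,
# occlusion) from the Azure Face service. Endpoint, key, and image URL
# below are placeholders, not real values.
from azure.cognitiveservices.vision.face import FaceClient
from msrest.authentication import CognitiveServicesCredentials

FACE_ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com/"  # placeholder
FACE_KEY = "<your-face-key>"                                            # placeholder
IMAGE_URL = "https://example.com/portrait.jpg"                          # placeholder

face_client = FaceClient(FACE_ENDPOINT, CognitiveServicesCredentials(FACE_KEY))

# Facial analysis: ask only for image/face characteristics, not identity.
# return_face_id=False because no identification or verification is needed.
faces = face_client.face.detect_with_url(
    url=IMAGE_URL,
    return_face_id=False,
    return_face_attributes=["exposure", "noise", "occlusion"],
)

for face in faces:
    attrs = face.face_attributes
    print("Exposure level:", attrs.exposure.exposure_level)   # e.g. goodExposure
    print("Noise level:", attrs.noise.noise_level)             # e.g. low
    print("Occlusion - forehead:", attrs.occlusion.forehead_occluded,
          "eyes:", attrs.occlusion.eye_occluded,
          "mouth:", attrs.occlusion.mouth_occluded)
```

The key point the sketch makes is that the call returns descriptive attributes about lighting, noise, and obstruction rather than any information about who the person is.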
Here's how the other options differ:
* Facial detection only identifies that a face exists in an image and provides its location using bounding boxes, without further interpretation.
* Facial recognition goes a step further: it attempts to identify or verify a person's identity by comparing the detected face with stored faces. This is not what the scenario describes (see the sketch after this list).
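For contrast, a hedged sketch of facial recognition (here, one-to-one verification) with the same azure-cognitiveservices-vision-face SDK is shown below. The endpoint, key, and image URLs are placeholders, and note that verification and identification features require Limited Access approval from Microsoft.

```python
# Hedged sketch: facial recognition (verification) compares two detected
# faces to decide whether they belong to the same person -- identity,
# not image quality. All credentials and URLs are placeholders.
from azure.cognitiveservices.vision.face import FaceClient
from msrest.authentication import CognitiveServicesCredentials

face_client = FaceClient(
    "https://<your-resource>.cognitiveservices.azure.com/",   # placeholder endpoint
    CognitiveServicesCredentials("<your-face-key>"),          # placeholder key
)

# Detect one face in each image so we have face IDs to compare.
face_a = face_client.face.detect_with_url(url="https://example.com/photo-a.jpg")[0]
face_b = face_client.face.detect_with_url(url="https://example.com/photo-b.jpg")[0]

# Verification answers "are these the same person?"
result = face_client.face.verify_face_to_face(
    face_id1=face_a.face_id, face_id2=face_b.face_id
)
print("Same person:", result.is_identical, "Confidence:", result.confidence)
```

The scenario in the question never compares a face against stored faces, which is why facial recognition is not the correct choice.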
Thus, when an AI solution evaluates image quality aspects like exposure, noise, and occlusion, it's performing facial analysis, which focuses on understanding image and facial characteristics rather than identification.
In summary, based on Microsoft's AI-900 study material, this scenario demonstrates facial analysis, a subcategory of computer vision tasks within Azure Cognitive Services.