When an identity is created, it may be assigned one or more claims issued by a trusted party. A claim is a name-value pair that represents what the subject is. In this case, I’m assigning the identity claim to the user in the current context. This claim is then retrieved in the Post action of the Access controller and returned as part of the API’s response.
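As a minimal sketch of that retrieval (the policy name, route and response shape are placeholders, not the article’s actual controller code), the Post action of the Access controller might read the claim like this:

using System.Linq;
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("api/[controller]")]
public class AccessController : ControllerBase
{
  [HttpPost]
  [Authorize(Policy = "FacePolicy")] // Placeholder policy name
  public IActionResult Post()
  {
    // The GenericIdentity added by the requirement handler surfaces as an
    // extra identity on the ClaimsPrincipal; pick up its name claim
    string name = User.Identities
      .Select(identity => identity.Name)
      .FirstOrDefault(n => !string.IsNullOrEmpty(n));

    return Ok(new { User = name, Message = "Access granted" });
  }
}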
The last step needed to enable this custom authorization process is registering the handlers with the Web API. Handlers are registered in the services collection during configuration:
services.AddSingleton<IAuthorizationHandler, FaceRequirementHandler>();
services.AddSingleton<IAuthorizationHandler, BodyRequirementHandler>();
services.AddSingleton<IAuthorizationHandler, VoiceRequirementHandler>();
This code registers each requirement handler as a singleton using the built-in dependency injection (DI) framework in ASP.NET Core. An instance of the handler will be created when the application starts, and DI will inject the registered class into the relevant object.
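To give an idea of how these requirements might then be attached to authorization policies, here is a minimal sketch, assuming each requirement type exposes a constructor that takes the minimum confidence score (the policy names and the 0.7 threshold are placeholders chosen for illustration):

services.AddAuthorization(options =>
{
  // Each policy carries a single biometric requirement
  options.AddPolicy("FacePolicy", policy =>
    policy.Requirements.Add(new FaceRecognitionRequirement(0.7)));
  options.AddPolicy("BodyPolicy", policy =>
    policy.Requirements.Add(new BodyRecognitionRequirement(0.7)));
  options.AddPolicy("VoicePolicy", policy =>
    policy.Requirements.Add(new VoiceRecognitionRequirement(0.7)));
});

A protected action can then opt in with [Authorize(Policy = "FacePolicy")], or stack several such attributes to demand more than one biometric factor.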
Face Identification
The solution uses the Azure Cognitive Services for Vision API to identify a person’s face and body. For more information about Cognitive Services and details on the API, please visit bit.ly/2sxsqry.
The Vision API provides face attribute detection and face verification. Face detection refers to the ability to detect human faces in an image. The API returns the rectangle coordinates of the location of the face within the processed image, and, optionally, can extract a series of face-related attributes such as head pose, gender, age, emotion, facial hair, and glasses. Face verification, in contrast, performs an authentication of a detected face against a person’s pre-saved face. Practically, it evaluates whether two faces belong to the same person. This is the specific API I use in this security project. To get started, please add the following NuGet package to your Visual Studio solution: Microsoft.Azure.CognitiveServices.Vision.Face 2.2.0-preview.
The .NET managed package is in preview, so make sure that you check the “Include prerelease” option when browsing NuGet, as shown in Figure 3.
Using the .NET package, face detection and recognition are straightforward. Broadly speaking, face recognition describes the work of comparing two different faces to determine if they’re similar or belong to the same person. The recognition operations mostly use the data structures listed in Figure 4.
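As a minimal sketch of the detection side (the subscription key and endpoint are placeholders for your own Cognitive Services resource), detecting the faces in an image and requesting a few optional attributes looks roughly like this:

using System.Collections.Generic;
using System.IO;
using System.Threading.Tasks;
using Microsoft.Azure.CognitiveServices.Vision.Face;
using Microsoft.Azure.CognitiveServices.Vision.Face.Models;

public static class FaceDetectionSample
{
  public static async Task<IList<DetectedFace>> DetectAsync(Stream image)
  {
    var client = new FaceClient(
      new ApiKeyServiceClientCredentials("<subscription-key>"))
    {
      Endpoint = "https://<your-region>.api.cognitive.microsoft.com"
    };

    // Returns the rectangle of each detected face plus the requested attributes
    return await client.Face.DetectWithStreamAsync(
      image,
      returnFaceId: true,
      returnFaceAttributes: new List<FaceAttributeType>
      {
        FaceAttributeType.Age, FaceAttributeType.Gender, FaceAttributeType.Emotion
      });
  }
}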
The verification operation takes a face ID from the list of faces detected in an image (the DetectedFace collection) and determines whether the faces belong to the same person by comparing the ID against a collection of persisted faces (PersistedFace). Persisted face images, each with a unique ID and a name, identify a Person. A group of persons can, optionally, be gathered into a PersonGroup in order to improve recognition performance. Basically, a person is the basic unit of identity, and a person object can have one or more known faces registered. Each person is defined within a particular PersonGroup—a collection of people—and identification is performed against a PersonGroup. The security system would create one or more PersonGroup objects and then associate people with them.
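To make the enrollment and verification flow concrete, here is a minimal sketch, assuming an existing IFaceClient instance and a face ID obtained from a previous detection call (the group ID, person name and file path are placeholders; production code would also poll the training status before relying on the group):

using System;
using System.IO;
using System.Threading.Tasks;
using Microsoft.Azure.CognitiveServices.Vision.Face;
using Microsoft.Azure.CognitiveServices.Vision.Face.Models;

public static class FaceVerificationSample
{
  public static async Task<VerifyResult> EnrollAndVerifyAsync(
    IFaceClient client, Guid detectedFaceId)
  {
    const string groupId = "site-operators"; // Placeholder group ID

    // Create the group and a person, enroll one face, then train the group
    await client.PersonGroup.CreateAsync(groupId, "Site operators");
    Person person = await client.PersonGroupPerson.CreateAsync(groupId, "John Doe");
    using (Stream enrollImage = File.OpenRead("john.jpg"))
    {
      await client.PersonGroupPerson.AddFaceFromStreamAsync(
        groupId, person.PersonId, enrollImage);
    }
    await client.PersonGroup.TrainAsync(groupId);

    // Verify the freshly detected face against the enrolled person;
    // VerifyResult exposes IsIdentical and a Confidence score between 0 and 1
    return await client.Face.VerifyFaceToPersonAsync(
      detectedFaceId, person.PersonId, groupId);
  }
}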
Figure 2 The Custom Authorization Handler
public class FaceRequirementHandler :
  AuthorizationHandler<FaceRecognitionRequirement>
{
  protected override Task HandleRequirementAsync(
    AuthorizationHandlerContext context, FaceRecognitionRequirement requirement)
  {
    string siteId =
      (context.Resource as HttpContext).Request.Query["siteId"];
    IRecognition recognizer = new FaceRecognition();
    if (recognizer.Recognize(siteId, out string name) >=
      requirement.ConfidenceScore)
    {
      context.User.AddIdentity(new ClaimsIdentity(
        new GenericIdentity(name)));
      context.Succeed(requirement);
    }
    return Task.CompletedTask;
  }
}
Each requirement is managed by an authorization handler, like the one in Figure 2, which is responsible for evaluating a policy requirement. You can choose to have a single handler for all requirements, or separate handlers for each requirement. The latter approach is more flexible, as it lets you configure a gradient of authorization requirements that you can easily compose in the Startup class. The face, body and voice requirement handlers extend the AuthorizationHandler<TRequirement> abstract class, where TRequirement is the requirement to be handled. Because I want to evaluate three requirements, I need a custom handler that extends AuthorizationHandler for each of FaceRecognitionRequirement, BodyRecognitionRequirement and VoiceRecognitionRequirement. Specifically, each handler overrides the HandleRequirementAsync method, which determines whether an authorization requirement is met. Because this method is asynchronous, it doesn’t return a real value; it only indicates that the task has completed.
Handling authorization consists of marking a requirement as “successful” by invoking the Succeed method on the authorization handler context. The requirement itself is verified by a “recognizer” object, which uses the Cognitive Services API internally (more in the next section). The recognition action, performed by the Recognize method, obtains the name of the identified person and returns a score that expresses the level of confidence in the identification, from zero (no confidence) to one (full confidence). An expected level is specified in the API setup; you can tune this value to whatever threshold is appropriate for your solution.
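For context, a requirement such as FaceRecognitionRequirement is just a marker class implementing IAuthorizationRequirement. A minimal sketch carrying the confidence threshold used by the handler in Figure 2 might look like this (the actual class in the article’s solution may differ):

using Microsoft.AspNetCore.Authorization;

public class FaceRecognitionRequirement : IAuthorizationRequirement
{
  public FaceRecognitionRequirement(double confidenceScore)
  {
    ConfidenceScore = confidenceScore;
  }

  // Minimum confidence (between zero and one) the recognizer must return
  // for the requirement to be considered satisfied
  public double ConfidenceScore { get; }
}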
Besides evaluating the specific requirement, the authorization handler also adds an identity claim to the current user.
Figure 3 NuGet Package for the Face API