
with them. Once a group is created, the PersonGroup collection must be trained before an identification can be performed with it. Moreover, it has to be retrained after any person is added or removed, or after any person has a registered face edited. Training is done through the PersonGroup Train API; with the client library, this is simply a call to the TrainPersonGroupAsync method:
await faceServiceClient.TrainPersonGroupAsync(personGroupId);
Training is an asynchronous process, and it may not be finished even after the TrainPersonGroupAsync method returns. Before proceeding with face identification or verification, you may need to poll the training status with the GetPersonGroupTrainingStatusAsync method until it reports success.
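As a minimal sketch of that polling loop, assuming the same faceServiceClient as above and the TrainingStatus/Status types from the Microsoft.ProjectOxford.Face contract (the one-second delay is an arbitrary choice):

// After TrainPersonGroupAsync returns, poll until the service stops
// reporting Running; Status is assumed to be an enum with Running,
// Succeeded and Failed values
TrainingStatus trainingStatus;
do {
  await Task.Delay(1000);
  trainingStatus =
    await faceServiceClient.GetPersonGroupTrainingStatusAsync(personGroupId);
} while (trainingStatus.Status == Status.Running);

if (trainingStatus.Status == Status.Failed)
  throw new InvalidOperationException("PersonGroup training failed.");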
When performing face identification, the Face API computes the similarity of a detected face against all the faces within a group, and returns the person (or people) most similar to that test face. This is done through the IdentifyAsync method of the client library: the test face is detected using the aforementioned steps, and its face ID is then passed to the Identify API along with the target PersonGroup. Multiple face IDs can be submitted at once, and the result contains an Identify result for each. By default, Identify returns only the one person that best matches the test face. If you prefer, you can specify the optional maxNumOfCandidatesReturned parameter to let Identify return more candidates.
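As a quick sketch of that option, using the same FaceClient as in Figure 5 (the group ID is a placeholder and the candidate limit of five is arbitrary):

// Ask Identify for up to five ranked candidates per test face,
// instead of the single best match returned by default
IList<IdentifyResult> results = await faceClient.Face.IdentifyAsync(
  faceIds, "<Person Group ID>", maxNumOfCandidatesReturned: 5);

foreach (IdentifyResult result in results)
  foreach (IdentifyCandidate candidate in result.Candidates)
    Console.WriteLine($"{candidate.PersonId}: {candidate.Confidence}");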
The code in Figure 5 demonstrates the process of identifying and verifying a face. First, you need to obtain a client object for the Face API by passing your subscription key and the API endpoint; you can obtain both values from the Azure Portal, where you provisioned the Face API service. You then detect any face visible in an image, passed as a stream to the DetectWithStreamAsync method of the client's Face object, which implements the detection and verification operations of the Face API. From the detected faces, I ensure that exactly one is present and obtain its ID, its unique identifier in the registered face collection of all the people authorized to access that site. The IdentifyAsync method then performs the identification of the detected face within a PersonGroup, and returns a list of best matches, or candidates, sorted by confidence level. With the person ID of the first candidate, I retrieve the person name, which is eventually returned to the Access Web API. The face authorization requirement is met.
Voice Recognition
The Azure Cognitive Services Speaker Recognition API provides algorithms for speaker verification and speaker identification. Voices have unique characteristics that can be used to identify a person, just like a fingerprint. The security solution in this article uses voice as a signal for access control: the subject speaks a pass phrase into a microphone registered as an IoT device. Just as with face recognition, voice recognition requires a pre-enrollment of authorized people. The Speaker API calls an enrolled person a "Profile." When enrolling a profile, the speaker's voice is recorded saying a specific phrase; a number of features are then extracted and the chosen phrase is recognized. Together, the extracted features and the chosen phrase form a unique voice signature. During verification, an input voice and phrase are compared against the enrollment's voice signature and phrase, to verify whether they're from the same person and whether the phrase is correct.
Looking at the code implementation, the Speaker API doesn't benefit from a managed package in NuGet like the Face API does, so you have to invoke its REST endpoints directly.
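For instance, a speaker verification call can be sketched with HttpClient. The URL, the verificationProfileId query parameter and the response shape below are assumptions based on the v1.0 Speaker Recognition REST API; the region, key, profile ID and file name are placeholders:

// Requires System.Net.Http and System.IO
using (HttpClient client = new HttpClient()) {
  client.DefaultRequestHeaders.Add(
    "Ocp-Apim-Subscription-Key", "<Subscription Key>");

  // Send the recorded pass phrase (WAV audio) to the verify endpoint
  // of the enrolled profile
  string url =
    "https://westus.api.cognitive.microsoft.com/spid/v1.0/verify" +
    "?verificationProfileId=<Verification Profile ID>";
  var audio = new ByteArrayContent(File.ReadAllBytes("passphrase.wav"));

  HttpResponseMessage response = await client.PostAsync(url, audio);
  string json = await response.Content.ReadAsStringAsync();

  // The JSON response reports a result (Accept/Reject), a confidence
  // level and the recognized phrase
  Console.WriteLine(json);
}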
Figure 5 The Face Recognition Process
public class FaceRecognition : IRecognition {
  public double Recognize(string siteId, out string name) {
    FaceClient faceClient = new FaceClient(
      new ApiKeyServiceClientCredentials("<Subscription Key>")) {
      Endpoint = "<API Endpoint>"
    };
    ReadImageStream(siteId, out Stream imageStream);

    // Detect faces in the image
    IList<DetectedFace> detectedFaces =
      faceClient.Face.DetectWithStreamAsync(imageStream).Result;

    // Expect exactly one face in the image
    if (detectedFaces.Count != 1) {
      name = string.Empty;
      return 0;
    }
    IList<Guid> faceIds =
      detectedFaces.Select(f => f.FaceId.Value).ToList();

    // Identify the face against the registered PersonGroup
    IList<IdentifyResult> identifiedFaces =
      faceClient.Face.IdentifyAsync(faceIds, "<Person Group ID>").Result;

    // No faces identified
    if (identifiedFaces.Count == 0) {
      name = string.Empty;
      return 0;
    }

    // Get the first candidate (candidates are ranked by confidence)
    IdentifyCandidate candidate =
      identifiedFaces.Single().Candidates.FirstOrDefault();

    // Find the person
    Person person = faceClient.PersonGroupPerson
      .GetAsync("<Person Group ID>", candidate.PersonId).Result;
    name = person.Name;
    return candidate.Confidence;
  }
}
Figure 4 Data Structures for the Face API

DetectedFace: A single face representation retrieved by the face detection operation. Its ID expires 24 hours after it's created.

PersistedFace: When DetectedFace objects are added to a group (such as a FaceList or a Person), they become PersistedFace objects, which can be retrieved at any time and do not expire.

FaceList/LargeFaceList: An assorted list of PersistedFace objects. A FaceList has a unique ID, a name string and, optionally, a user data string.

Person: A list of PersistedFace objects that belong to the same person. It has a unique ID, a name string and, optionally, a user data string.

PersonGroup/LargePersonGroup: An assorted list of Person objects. It has a unique ID, a name string and, optionally, a user data string. A PersonGroup must be trained before it can be used in recognition operations.
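To tie the structures in Figure 4 together, here's a minimal enrollment sketch using the same Azure Face client library as Figure 5: it creates a PersonGroup, registers a Person, attaches a face image (which becomes a PersistedFace) and retrains the group. The group ID, group name, person name and file name are placeholders:

// Create the group of authorized people (one-time setup)
await faceClient.PersonGroup.CreateAsync(
  "<Person Group ID>", "Authorized People");

// Register a person in the group
Person person = await faceClient.PersonGroupPerson.CreateAsync(
  "<Person Group ID>", "Jane Doe");

// Add a face image for the person; the detected face is stored
// as a PersistedFace, which doesn't expire
using (Stream image = File.OpenRead("jane.jpg")) {
  await faceClient.PersonGroupPerson.AddFaceFromStreamAsync(
    "<Person Group ID>", person.PersonId, image);
}

// Retrain the group so Identify can match the new face
await faceClient.PersonGroup.TrainAsync("<Person Group ID>");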