Finding Face Landmarks
Once a face is detected, the next step is to determine the coordinates of common facial features in the image. There are 68 landmark points on the human face that are of interest to most face detection algorithms. These points include the nose, the mouth, eyebrows, jaw and more. Figure 1 depicts highlighted facial landmark points.
This level of detail provides data for two useful purposes. First, the algorithm can determine the orientation of the face. In a two-dimensional image, the locations of the 68 landmark points can get distorted by the roll, tilt and angle of the face. The algorithm’s ability to infer this information is how popular messaging apps can do things like place virtual sunglasses over people’s eyes or apply makeup on a face and sync its movements with the person as he or she moves. Second, this collection of facial landmark points is used to identify an individual. The distance between these points is a unique characteristic and algorithms can be trained to recognize an individual to a degree of accuracy rivaling humans.
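To make the distance idea concrete, here’s a quick sketch. It anticipates the face_recognition library and the frank.jpg sample image used later in this column, and it’s an illustration only; the library doesn’t actually match faces on raw landmark geometry, but on learned 128-dimension encodings:

import itertools
import numpy as np
import face_recognition

# Illustration only: gather every (x, y) landmark point for the first
# detected face and compute all pairwise Euclidean distances as a crude
# geometric "signature."
image = face_recognition.load_image_file("frank.jpg")
landmarks = face_recognition.face_landmarks(image)[0]
points = [point for feature in landmarks.values() for point in feature]
signature = [np.linalg.norm(np.subtract(a, b))
             for a, b in itertools.combinations(points, 2)]
print("{} points, {} pairwise distances".format(len(points), len(signature)))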
Coding a Face Detection System
Let’s start by creating a Python 3 notebook on your preferred platform (I covered Jupyter Notebooks in a previous column at msdn.com/magazine/mt829269). Create an empty cell, enter the following code to enable inline display of images and execute the cell:
%matplotlib inline
There are multiple libraries that perform face detection and face recognition. For this article, I chose to work with the face_recognition library at pypi.org/project/face_recognition. Create a new cell and enter the following code to install it:
! pip install face_recognition
This process may take a few moments as the package downloads, compiles and installs. Enter the following code into a new cell and execute it to import the required libraries:
from matplotlib.pyplot import imshow
import numpy as np
import PIL.Image
import PIL.ImageDraw
import face_recognition
Next, enter the code in Figure 2 to create a function that takes an image file and runs it through the face_recognition library.
After executing that code, create a new cell and enter the following:
findFaces("frank.jpg")
The output should read:
1 face(s) in this image
Face coordinates: Top: 172, Left: 171, Bottom: 726, Right: 726
Feel free to test the algorithm with your own images. In the project on the Azure Notebook service, I have several images in the project directory to test with, including a crowd image with multiple faces.
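Because PIL.ImageDraw is imported above but not used in the Figure 2 listing, a quick sketch of a helper (call it drawFaces) can draw a box around each detected face before displaying the image:

# Sketch (not in Figure 2): outline each detected face with a rectangle.
def drawFaces(imageName):
    image = face_recognition.load_image_file(imageName)
    face_locations = face_recognition.face_locations(image)

    pil_image = PIL.Image.fromarray(image)
    draw = PIL.ImageDraw.Draw(pil_image)
    for top, right, bottom, left in face_locations:
        # face_locations returns (top, right, bottom, left) pixel coordinates.
        draw.rectangle([(left, top), (right, bottom)], outline="red", width=3)

    imshow(np.asarray(pil_image))

drawFaces("frank.jpg")

Run it against the crowd image as well and every detected face should come back outlined.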
Next, enter the following code to display all the face landmarks in an image:
faceImage = face_recognition.load_image_file("frank.jpg")
face_landmarks = face_recognition.face_landmarks(faceImage)
print(face_landmarks)
The result should display sets of coordinates labeled “left eyebrow,” “nose tip” and so on. Again, feel free to experiment with your own images.
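To see where those points fall on the image, here’s another quick sketch that traces each labeled feature with PIL.ImageDraw, building on the same imports:

# Sketch: trace each labeled facial feature with a thin line.
faceImage = face_recognition.load_image_file("frank.jpg")
pil_image = PIL.Image.fromarray(faceImage)
draw = PIL.ImageDraw.Draw(pil_image)

for face_landmarks in face_recognition.face_landmarks(faceImage):
    for feature, points in face_landmarks.items():
        # Each feature (chin, nose_tip, left_eye and so on) is a list of (x, y) tuples.
        draw.line(points, fill="yellow", width=2)

imshow(np.asarray(pil_image))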
As stated earlier, detecting faces and finding landmarks are the precursors to face recognition. Let’s now use the face_recognition library to compare two images, frank.jpg and frank2.jpg, to see if they’re the same person. Enter the following code into a new cell and execute it:
known_image = face_recognition.load_image_file("frank.jpg")
mystery_image = face_recognition.load_image_file("frank2.jpg")
frank_encoding = face_recognition.face_encodings(known_image)[0]
mystery_encoding = face_recognition.face_encodings(mystery_image)[0]
results = face_recognition.compare_faces([frank_encoding], mystery_encoding)
print(results)
Not surprisingly (as both images are of me), the code returns a result with the Boolean value of true.
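Behind that true/false answer is a distance between two 128-dimension face encodings, checked against a default tolerance of 0.6. If you want to see the raw number, or make the match stricter, the following sketch shows both (the exact values will vary with your images):

# The raw distance behind the True/False answer (smaller means more similar).
distance = face_recognition.face_distance([frank_encoding], mystery_encoding)
print(distance)

# compare_faces accepts a tolerance parameter; 0.6 is the default and a
# lower value makes matching stricter.
strict_results = face_recognition.compare_faces(
    [frank_encoding], mystery_encoding, tolerance=0.5)
print(strict_results)

Next, enter the following code into a new cell and execute it: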
mystery_image2 = face_recognition.load_image_file("andy.jpg")
mystery_encoding2 = face_recognition.face_encodings(mystery_image2)[0]
results = face_recognition.compare_faces([frank_encoding], mystery_encoding2)
print(results)
In this instance, I’ve introduced the image of another person (andy.jpg) and compared it to frank.jpg. No surprise, the result comes back as false. Thus far we’ve only compared images containing one face. What about more complicated images? Let’s now apply face recognition to an image depicting a crowd of people. Enter the following code into a new cell and execute it:
crowd_image = face_recognition.load_image_file("crowd.jpg")
crowd_encoding = face_recognition.face_encodings(crowd_image)
for encoding in crowd_encoding:
    is_frank_in_the_crowd = face_recognition.compare_faces([frank_encoding], encoding)
    print(is_frank_in_the_crowd)
The answer comes back as false 21 times, which is correct as there are 21 faces detected in that image and I am not in the picture. Feel free to experiment with various pictures of your own to see what kind of results you can get.
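Had there been a match, it would be useful to know where in the crowd it was. One possible approach, sketched here, is to request the face locations first and pass them to face_encodings so that each encoding lines up with its bounding box:

# Sketch: pair each encoding with its bounding box so a match can be located.
crowd_locations = face_recognition.face_locations(crowd_image)
crowd_encodings = face_recognition.face_encodings(crowd_image, crowd_locations)

for location, encoding in zip(crowd_locations, crowd_encodings):
    if face_recognition.compare_faces([frank_encoding], encoding)[0]:
        top, right, bottom, left = location
        print("Match found at Top: {}, Left: {}".format(top, left))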
Wrapping Up
Our faces, by definition, are personally identifiable information (PII). So it comes as little surprise that work in the field of face recognition has stirred up controversy. The Viola-Jones algorithm was a landmark discovery that led to widespread innovation in everything from digital cameras to surveillance systems, yielding deep concerns about privacy and civil liberties (to the point that some jurisdictions have banned use of face recognition by law enforcement). Given our relentless march toward a more connected and data-driven society, concerns around this technology, especially in the context of AI-driven systems, will only intensify.
Figure 2 Code for the findFaces Function
def findFaces(imageName):
    # Load the image into a NumPy array and find all face bounding boxes.
    image = face_recognition.load_image_file(imageName)
    face_locations = face_recognition.face_locations(image)

    number_of_faces = len(face_locations)
    print("{} face(s) in this image".format(number_of_faces))

    # Convert to a PIL image for display and report each face's coordinates.
    pil_image = PIL.Image.fromarray(image)
    for face_location in face_locations:
        top, right, bottom, left = face_location
        print("Face coordinates: Top: {}, Left: {}, Bottom: {}, Right: {}".format(
            top, left, bottom, right))

    imshow(np.asarray(pil_image))