
The Basics of Deep Learning
Most modern deep learning models are based on an artificial neural network, although they can also include propositional formulas or latent variables organized layer-wise in deep generative models such as the nodes in deep belief networks.
In deep learning, each level learns to transform its input data into a slightly more abstract and composite representation. In an image recognition application, the raw input may be a matrix of pixels; the first representational layer may abstract the pixels and encode edges; the second layer may compose and encode arrangements of edges; the third layer may encode a nose and eyes; and the fourth layer may recognize that the image contains a face. Importantly, a deep learning process can learn which features to optimally place in which level on its own. (Of course, this does not completely obviate the need for hand-tuning; for example, varying numbers of layers and layer sizes can provide different degrees of abstraction.)
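To make the layered idea concrete, here is a minimal sketch in Python, assuming the PyTorch library is available; the layer labels and dimensions are purely illustrative, and in practice the network decides for itself what each level encodes rather than following the comments.

```python
# A toy layered image classifier. The comments map layers to the article's
# edges -> edge arrangements -> parts -> face progression only conceptually;
# a trained network learns its own feature hierarchy.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),    # layer 1: low-level patterns such as edges
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(8, 16, kernel_size=3, padding=1),   # layer 2: arrangements of edges
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # layer 3: object parts (nose, eyes)
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(32, 2),                             # layer 4: whole-object decision (face / not face)
)

x = torch.randn(1, 1, 64, 64)   # one 64x64 grayscale image as a stand-in input
print(model(x).shape)           # torch.Size([1, 2])
```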
The “deep” in “deep learning” refers to the number of layers through which the data is transformed. More precisely, deep learning systems have a substantial credit assignment path (CAP) depth. The CAP is the chain of transformations from input to output. CAPs describe potentially causal connections between input and output. For a feedforward neural network, the depth of the CAPs is that of the network and is the number of hidden layers plus one (as the output layer is also parameterized).
For recurrent neural networks, in which a signal may propagate through a layer more than once, the CAP depth is potentially unlimited. No universally agreed-upon threshold of depth divides shallow learning from deep learning, but most researchers agree that deep learning involves CAP depth greater than 2. A CAP of depth 2 has been shown to be a universal approximator in the sense that it can emulate any function, and beyond that, additional layers do not add to the network's ability to approximate functions. Deep models (CAP > 2), however, are able to extract better features than shallow models, and the extra layers help in learning those features.
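As a small worked example of the CAP depth rule for feedforward networks (hidden layers plus one), consider the hypothetical helper below; the function name and the list-of-layer-widths representation are ours, not a standard API.

```python
# Hypothetical helper: CAP depth of a feedforward network described by its
# layer widths [input, hidden..., output] is (number of hidden layers) + 1,
# since the output layer is also parameterized.
def cap_depth(layer_sizes):
    num_hidden = max(len(layer_sizes) - 2, 0)
    return num_hidden + 1

print(cap_depth([784, 128, 10]))        # 2 -> universal approximator, but "shallow"
print(cap_depth([784, 256, 128, 10]))   # 3 -> "deep" by the CAP > 2 convention
```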
Deep learning architectures are often constructed with a greedy layer-by-layer method, in which each layer is trained on top of the layers already learned. Deep learning helps to disentangle these layered abstractions and pick out the features that improve performance.
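One common reading of the greedy layer-by-layer method is layer-wise pretraining, sketched below under the assumption that PyTorch is available: each new layer is trained on its own, here with a simple reconstruction objective, on the representation produced by the layers already trained, and is then stacked. The dimensions and training details are illustrative only.

```python
# Greedy layer-wise pretraining sketch: train one layer at a time as a tiny
# autoencoder on the previous layer's output, then stack it.
import torch
import torch.nn as nn

def train_layer(encoder, data, epochs=5):
    decoder = nn.Linear(encoder.out_features, encoder.in_features)
    opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
    for _ in range(epochs):
        recon = decoder(torch.relu(encoder(data)))   # reconstruct the layer's own input
        loss = nn.functional.mse_loss(recon, data)
        opt.zero_grad(); loss.backward(); opt.step()
    return encoder

data = torch.randn(256, 64)      # stand-in inputs
widths = [64, 32, 16]            # arbitrary layer widths
stack, h = [], data
for i in range(len(widths) - 1):
    layer = train_layer(nn.Linear(widths[i], widths[i + 1]), h)
    stack.append(layer)
    h = torch.relu(layer(h)).detach()   # the next layer learns on top of this representation
```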
“As camera counts and the data they provide grow ever-larger, it becomes increasingly difficult for organizations to monitor, perform investigations and draw useful conclusions from the valuable information gathered by their video surveillance infrastructure,” said Brian Carle, director of product strategy at Salient Systems. “Video analytics have long been seen as a technology solution to help identify activity and information from all the video data. Video analytics have largely fallen short of delivering on that market expectation. Deep learning can change that.”
For supervised learning tasks, deep learning methods obviate feature engineering by translating the data into compact intermediate representations akin to principal components, and they derive layered structures that remove redundancy in the representation.
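The comparison to principal components can be made concrete with a short numpy sketch: projecting data onto a few directions of maximal variance yields a compact code, much as a network's intermediate layers compress their input, though the network's compression is learned, nonlinear and layered. The data below is a random stand-in.

```python
# Principal-component style compression as an analogy for learned
# intermediate representations: keep a few directions, drop redundancy.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))        # stand-in feature vectors
Xc = X - X.mean(axis=0)               # center the data

U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
codes = Xc @ Vt[:3].T                 # 3-dimensional compact representation

explained = (S[:3] ** 2).sum() / (S ** 2).sum()
print(codes.shape, f"variance retained: {explained:.2f}")
```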
“Most Video Content Analytics developed to date have been based on traditional, algorithmic Machine Learning techniques. Deep Learning is a more advanced evolution of machine learning, using sophisticated, artificial neural networks,” Carle said. “In the context of VCA, both Machine Learning and Deep Learning instruct software to develop a model of objects based on a variety of attributes the software ‘learns’ about those objects.
“The model helps the software to later identify and categorize an object in the video feed which matches those attributes the software has learned. For instance, an object moving through the camera’s field of view may be taller than it is wide, as opposed to another object, which is wider than it is tall. The VCA software may classify the first object as a person and the second as a vehicle, based on those attributes.”
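The aspect-ratio rule Carle describes can be reduced to a toy classifier like the one below; real VCA models weigh many more learned attributes, and the labels and threshold here are purely illustrative.

```python
# Toy bounding-box classifier based on the taller-than-wide heuristic from
# the quote. Dimensions are in pixels; the rule is illustrative only.
def classify_object(width, height):
    if height > width:
        return "person"     # taller than it is wide
    return "vehicle"        # wider than it is tall

print(classify_object(width=40, height=110))    # person
print(classify_object(width=220, height=90))    # vehicle
```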
Genetec Citigraf is another example of a product that leverages advanced machine learning algorithms to estimate how different types of crime influence the risk of other crimes occurring in the future. For example, it can determine how close in time and space a robbery has to occur to your home to increase the risk of your home being robbed. In this case, there is no “ground truth” in the original problem, and the answers are learned from the data. To handle failures before they occur, we are also using unsupervised machine learning to help our systems predict when they will become unstable. Currently, Security Center provides warnings when you have used 90 percent of the available disk space. Our goal is to have the system inform you that you will exceed available disk space in x number of days when you are only at 10 percent usage.
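The disk-space goal can be illustrated with a simple trend extrapolation, sketched below; the sample usage figures and the linear fit are our own stand-ins, since the article does not describe how Genetec's system actually models usage.

```python
# Illustrative forecast: fit a trend to observed disk usage and estimate how
# many days remain before capacity is reached.
import numpy as np

days = np.arange(10)                                       # observation days
used_gb = 50 + 4.2 * days + np.random.default_rng(1).normal(0, 1.5, 10)
capacity_gb = 500

slope, intercept = np.polyfit(days, used_gb, 1)            # GB consumed per day, starting level
if slope > 0:
    day_full = (capacity_gb - intercept) / slope
    print(f"Projected to exceed available disk space in ~{day_full - days[-1]:.0f} days")
```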
Limitations
While advances in deep learning can help us realize greater operational efficiency, it is important to acknowledge that deep learning is not the ‘be-all and end-all’ associated with AI. In fact, it cannot yet teach itself new tasks nor automatically make sense of data through unsupervised learning.
It also has certain limitations. For instance, it can be difficult for users to interpret deep learning methods when trying to identify the steps a machine takes to reach a decision. In addition, it is still limited in that it requires large amounts of training data to capture complex trends.
However, we are seeing significant benefits with our current applications.
Data Stewardship
When it comes to machine learning, at Genetec our aim is to provide end users with highly analyzed data that will guide them towards making accurate, critical decisions for verified results. The exploratory phases of creating a data-science-based solution begin with data. In order to be granted access to data, we need to demonstrate to our customers and partners that we are proper stewards of data and can be trusted with it. An important step towards building that trust is being clear about what data science can and cannot do, which starts with debunking popular assumptions.
We are still a long way from true AI: machines cannot give meaning to, or make sense of, something on their own. Applications can be developed to use pre-programmed algorithms to discover patterns in data, or trained to correctly recognize and classify different inputs. Working with these algorithms, we can also allow them to make their own improvements to perform their tasks more efficiently. This is a starting point towards unlocking the potential of machine learning that will allow us to find even more innovative approaches to protecting our everyday lives and developing safer, more secure environments.
Sean Lawlor is a data scientist at Genetec Inc.










































































