
Seeing the Potential
The core of learning: shallow or deep
By Keven Marier
Artificial Intelligence is an all-encompassing category covering many things, including a wide range of neural networks with different capabilities. Neural networks are segmented by how they approach a particular, unstructured data set or problem, either with a process, algorithm or machine learning approach.
Due to previous limitations in hardware processing power, machine learning could only deploy shallow learning on very large data sets. This shallow learning looks at data in just three dimensions. With recent, significant advances in the processing power of graphical processing units (GPUs), we can now use a deep learning approach where we can look at data in many more levels or dimensions, hence the word "deep."
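To make the shallow-versus-deep distinction concrete, here is a minimal, illustrative sketch in Python with NumPy (not Milestone's code; the layer counts and sizes are arbitrary assumptions). A "shallow" model passes data through a single hidden layer, while a "deep" one stacks many layers, each re-representing the data at another level:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def forward(x, layers):
    """Pass input x through a list of weight matrices."""
    for w in layers:
        x = relu(x @ w)
    return x

rng = np.random.default_rng(0)
features = rng.standard_normal((1, 64))  # one 64-dimensional input

# "Shallow" learning: a single hidden layer between input and output.
shallow = [rng.standard_normal((64, 32)), rng.standard_normal((32, 8))]

# "Deep" learning: many stacked layers, each building on the
# representation produced by the one before it.
deep = [rng.standard_normal((64, 64)) for _ in range(10)] + \
       [rng.standard_normal((64, 8))]

print(forward(features, shallow).shape)  # (1, 8)
print(forward(features, deep).shape)     # (1, 8)
```

Each added layer is another level of representation, and stacks of these matrix operations are exactly the workload GPUs accelerate.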
Milestone has moved to this new GPU-compute platform by re-coding its software to use a technique called parallelization. Software parallelization breaks a single problem into hundreds or thousands of smaller problems. The software can then run those 100 or 1,000 tasks across 1,000 processing cores at once, instead of waiting for one core to process the data 1,000 times.
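As a rough CPU-side analogy of that idea (the function names and chunk sizes here are hypothetical, not Milestone's implementation), Python's multiprocessing module shows the same pattern: split one large job into many small ones and fan them out across every available core:

```python
from multiprocessing import Pool

def analyze_chunk(chunk):
    """Stand-in for the per-chunk work (e.g., analyzing one video segment)."""
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    # Break one large problem into 1,000 smaller ones...
    chunks = [data[i:i + 1000] for i in range(0, len(data), 1000)]
    # ...and fan them out across all available cores instead of
    # processing them one after another on a single core.
    with Pool() as pool:
        partials = pool.map(analyze_chunk, chunks)
    print(sum(partials))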
With parallelization comes a quantum leap in how fast we can solve a problem, and the faster a problem can be solved, the deeper we can go into it and the deeper the data sets that can be processed.
IoT Frameworks
Milestone's role as a video management platform company is one of aggregation: developing broad support for all relevant devices. The company's vision is to support all of the top IoT frameworks. What is a framework? ONVIF, for example, is an IoT framework. Many think of it as a camera firmware standard, but it is actually an IoT standard: the camera is an IoT device, and ONVIF is the framework it connects through.
One focus is to continue enabling more of those devices, across different frameworks, into a common data center. Then we will continue to advance GPU technology to create a whole new level of processing, helping companies that use GPUs for parallelization, like BriefCam, to run those functions right on the unused GPU capacity already in the hardware.
NVIDIA invented the GPU, and the company is driving machine-to-machine communication forward at an exponential rate. One of NVIDIA's GPUs offers 5,000 cores, meaning 5,000 smaller problems can be worked on simultaneously. Milestone is hardly using any of those cores yet: the software decodes the video and detects slight motion, but a significant amount of GPU resource is still left available.
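As an illustration of the kind of per-pixel work being described (a toy frame-differencing sketch, not Milestone's actual detection code), slight motion can be scored by counting how many pixels change between consecutive frames; each pixel comparison is independent of the others, which is why this workload spreads so naturally across thousands of GPU cores:

```python
import numpy as np

def motion_score(prev_frame, curr_frame, threshold=25):
    """Fraction of pixels whose brightness changed by more than `threshold`."""
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return float((diff > threshold).mean())

rng = np.random.default_rng(1)
frame_a = rng.integers(0, 256, (480, 640), dtype=np.uint8)  # grayscale frame
frame_b = frame_a.copy()
frame_b[100:150, 200:300] += 60  # simulate an object moving through the scene

print(motion_score(frame_a, frame_b))  # ~0.016: the moving region's share
```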
By allowing companies to plug right into that pipeline, the VMS can process all of their data without any additional hardware. The processing power is there, and with it you can extract all of the intermediate data, have it aggregated, and start creating automation. From there, you can create new types of visual presentations for this information.
Advanced rendering is about creating a whole different type of mixed reality. Some data are artificial, some data are real, and as humans we will use both to create a more interesting picture of a problem. The BriefCam Synopsis system is an example of mixed reality: it uses real video, extracts objects of interest and then provides an overlay of augmented reality. Humans cannot look at 24 hours of video in nine minutes, but with Synopsis, our intelligence can be augmented.
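A minimal sketch of that overlay idea, assuming nothing about BriefCam's internals: the "real" layer is the decoded video frame, and the "artificial" layer is graphics burned in on top of it, here a simple highlight box around an extracted object:

```python
import numpy as np

def overlay_box(frame, top, left, bottom, right, color=(0, 255, 0)):
    """Burn a rectangular highlight into an RGB frame -- the 'artificial'
    layer drawn on top of the 'real' video pixels."""
    out = frame.copy()
    out[top:bottom, left] = color        # left edge
    out[top:bottom, right] = color       # right edge
    out[top, left:right] = color         # top edge
    out[bottom, left:right + 1] = color  # bottom edge
    return out

frame = np.zeros((480, 640, 3), dtype=np.uint8)      # the "real" video frame
augmented = overlay_box(frame, 100, 200, 150, 300)   # the mixed-reality result
print(augmented[100, 250])  # [  0 255   0] -- an overlay pixel
```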
Actualized Potential of Augmentation
AI and machine learning are being applied so that AI-enabled devices and machines become very good at low-cognition functions. For example, humans cannot sit and watch all cameras simultaneously, all the time; our attention does not work that way. Machines, however, are extremely good and detailed at this. We do not see pixels, we see objects. The machine sees the most finite detail available to it, the pixel, and within each pixel it can see still more detail: the shades of color in that image. By aggregating data and allowing machines to automate responses and solutions, we can augment human interaction and our environment.
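To illustrate that last point (a toy example, with made-up pixel values): where a human perceives an object, the machine's smallest unit of sight is the pixel, and inside each pixel, the individual color channels:

```python
import numpy as np

rng = np.random.default_rng(2)
image = rng.integers(0, 256, (4, 4, 3), dtype=np.uint8)  # tiny RGB image

# A human perceives an object; the machine's smallest unit is the pixel,
# and within each pixel, the separate shades of red, green and blue.
r, g, b = image[1, 2]
print(f"pixel (1, 2): red={r}, green={g}, blue={b}")
```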
Everything is about to change. In how we review and use video and data alone, we are going to see massive advancements. Imagine an interaction between a near-eye lens, a medium-distance viewing glass and large video screens. There may be an overlay of detailed text data on my small lens,