Page 42 - Security Today, September/October 2021

When it comes to protecting assets and people, real-time alerts generated by a video management system (VMS) enable security teams to be proactive rather than reactive as events unfold. Because AI-powered analytics dramatically reduce false alarms, they can more accurately flag the incidents that require further investigation by operators. Thanks to the extra data AI-based cameras capture, analytic rules can be enhanced with more sophisticated logic and customized to precisely what an end user requires. For example, we can tell a camera to ignore all cars but to alert us when a person comes to the door. AI can also help us count objects like people or cars more precisely than ever, including counting accurately even when objects partly "occlude" or pass in front of each other. This is key because it enables use cases like people counting from more practical camera angles. It is far superior to conventional people-counting techniques, which require a top-down view to avoid occlusion and which give a less useful camera view when you also want to identify faces.
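To make the "ignore all cars, alert on a person at the door" rule concrete, here is a minimal sketch of how such logic might be expressed. All names here (the `Detection` record, the `DOOR_ZONE` coordinates) are illustrative assumptions, not a real camera API; actual cameras expose rule configuration through their own interfaces.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str   # object class assigned by the camera's AI, e.g. "person", "car"
    x: float     # bounding-box center, normalized 0..1
    y: float

# Hypothetical region of interest covering the doorway (x_min, y_min, x_max, y_max)
DOOR_ZONE = (0.4, 0.6, 0.6, 1.0)

def should_alert(det: Detection) -> bool:
    """Alert on a person inside the door zone; ignore everything else."""
    if det.label != "person":        # rule: ignore cars and all other classes
        return False
    x0, y0, x1, y1 = DOOR_ZONE
    return x0 <= det.x <= x1 and y0 <= det.y <= y1

# A car near the door is ignored; a person in the same spot triggers an alert.
print(should_alert(Detection("car", 0.5, 0.8)))     # False
print(should_alert(Detection("person", 0.5, 0.8)))  # True
```

The point is that the rule filters on object class first, so vehicle motion never generates an alarm no matter where it occurs.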
When it comes to post-event forensic searches, AI-based cameras are in a league of their own. Additional descriptive metadata about objects is captured within each frame. Because the metadata is small, it adds very little to the overall bandwidth and storage requirements.
That metadata, which might include descriptive characteristics of objects like the color of a person’s shirt or pants or their approximate age and gender, enables a VMS operator to quickly search through video to find a particular object or person. A search that might have taken security staff hours or days to complete now takes only seconds when the search includes additional metadata provided by an AI camera.
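A forensic search over this kind of metadata amounts to filtering small records rather than scrubbing through video. The sketch below assumes a hypothetical metadata schema (shirt color, age band, timestamps); real cameras each define their own fields.

```python
# Illustrative per-frame object metadata as an AI camera might record it
records = [
    {"time": "09:01", "type": "person", "shirt": "red",  "age": "adult"},
    {"time": "09:03", "type": "car",    "color": "blue"},
    {"time": "09:07", "type": "person", "shirt": "blue", "age": "adult"},
    {"time": "09:12", "type": "person", "shirt": "red",  "age": "child"},
]

def search(records, **criteria):
    """Return records whose fields match every given criterion."""
    return [r for r in records
            if all(r.get(k) == v for k, v in criteria.items())]

# Find every adult wearing a red shirt -- a seconds-long lookup instead of
# hours of reviewing raw footage.
matches = search(records, type="person", shirt="red", age="adult")
print([m["time"] for m in matches])   # ['09:01']
```

Because only the matching timestamps are needed, the VMS can then jump straight to the relevant video frames.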
Although people counting, heat maps and queue management analytics have existed for some time, they too have been subject to the inherent inaccuracies of pixel-based motion detection. Conversely, AI-based object detection delivers profoundly accurate data and metrics for operations, sales and marketing teams looking for insight on everything from retail store performance to ensuring process efficiency and operational compliance.
As a result, these cameras have become an indispensable tool for business operations. Depending on the business, the value proposition for such data can be a game-changer worth many times the cost of the system.
For customers with more sophisticated data-analysis needs, camera metadata can be accessed, combined with other data, and processed by other platforms for advanced visualization and data mining. This allows technology partners to incorporate the aggregated data into their own charts, graphs and exception reports, powered by specialized software they may already be using. Familiar use cases span multiple industries and require linking data from access control, intrusion, point-of-sale systems, staffing data, schedule data, weather data and many other sources. The potential for this unified data to create comprehensive business solutions is substantial.
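As a rough sketch of this kind of data unification, the example below joins hourly people counts from camera metadata with point-of-sale transactions and staffing levels to produce a simple exception report. The field names, figures and conversion threshold are all hypothetical.

```python
camera_counts = {"10:00": 42, "11:00": 95, "12:00": 130}   # visitors per hour
pos_sales     = {"10:00": 18, "11:00": 30, "12:00": 31}    # transactions per hour
staffing      = {"10:00": 3,  "11:00": 3,  "12:00": 3}     # staff on the floor

def exception_report(min_conversion=0.3):
    """Flag hours where visitors converted to sales below the threshold."""
    flagged = []
    for hour, visitors in camera_counts.items():
        conversion = pos_sales[hour] / visitors
        if conversion < min_conversion:
            flagged.append((hour, round(conversion, 2), staffing[hour]))
    return flagged

# 12:00 saw 130 visitors but only 31 sales (24% conversion) at unchanged
# staffing -- a possible understaffing signal for the operations team.
print(exception_report())   # [('12:00', 0.24, 3)]
```

In practice this joining and charting would be done by the partner's own BI or reporting platform; the sketch only shows why linking the sources is what creates the insight.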
“It is also important to remember that what differentiates today’s technology from true AI is that machine learning and deep learning algorithms cannot learn new things by themselves.”
Since video cameras are already well accepted and commonplace, the opportunities for them to evolve into unobtrusive, important data gathering tools for business and operations intelligence will only continue to grow. Operations and marketing departments may also find common ground when budgeting for a system that can serve the needs of both departments.
AI-based analytics can run on the edge or on a server, but each deployment method has significant trade-offs that should be considered. With AI on the edge, valuable events and other metadata generated at the camera must be gathered from many endpoints and aggregated to enable clear visualization of the trends and anomalies identified. This can be done on a lightweight local server that also runs the VMS. Because edge-based analytics run before video is compressed and streamed, running them at the camera significantly reduces the cost of the server resources that would otherwise be required.
Running AI on a server requires that the video stream first be decoded, which consumes CPU/GPU resources that scale dramatically as the stream count increases. While the processing power of a server far outweighs what a camera can provide, there is a point of diminishing returns in doing everything on a server for all but the most demanding processing. For that reason, a hybrid approach, in which AI analytics are performed on the edge and the lightweight data results are sent to an inexpensive server or workstation for aggregation and display, will remain a popular choice for some time.
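The aggregation side of this hybrid approach can be very lightweight, since no video is decoded on the server, only small event records are tallied. The event structure below is an illustrative assumption.

```python
from collections import Counter

# Lightweight events as they might arrive from many edge cameras
edge_events = [
    {"camera": "lobby",   "event": "person_entered"},
    {"camera": "lobby",   "event": "person_entered"},
    {"camera": "parking", "event": "vehicle_detected"},
    {"camera": "lobby",   "event": "loitering"},
]

def aggregate(events):
    """Tally events per (camera, event) pair -- cheap enough to run
    alongside the VMS on a modest server."""
    return Counter((e["camera"], e["event"]) for e in events)

totals = aggregate(edge_events)
print(totals[("lobby", "person_entered")])   # 2
```

The contrast with server-side analytics is the workload: counting dictionary-sized events scales easily to hundreds of cameras, whereas decoding and analyzing hundreds of video streams does not.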
Aaron Saks is the product and technical manager at Hanwha Techwin America.
