
Artificial Intelligence
pixels to disk. They can’t identify the objects they capture or store any data about them. With traditional motion analytics, if the camera sensor detects movement in those pixels, it can place a bookmark in the recording or send an alert. We can draw a box or a line and get a notification any time those pixels change. As anyone who has tried these analytics has experienced, traditional motion-based analytics are very prone to false positives. They are almost entirely useless for any cameras that are either outside or near a window, since any light change will “trigger” them. For this reason, security professionals have had to relegate pixel-based motion analytics to specific indoor-only deployments. The initial hype that accompanied the first motion analytics was soon discovered to be just that. It’s no surprise, then, that security professionals have been slow to rush toward AI-based analytics for fear of another round of over-promising and under-delivering.
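For readers who want to see why these analytics trip so easily, here is a minimal sketch of pixel-based motion detection in Python, using the OpenCV library; the region box, blur, and thresholds are illustrative assumptions, not values from any particular product.

# A minimal sketch of pixel-based motion detection using OpenCV.
# The "drawn box" and thresholds below are illustrative only.
import cv2

cap = cv2.VideoCapture(0)          # any video source
roi = (100, 100, 300, 200)         # x, y, width, height of the drawn box
prev = None

while True:
    ok, frame = cap.read()
    if not ok:
        break
    x, y, w, h = roi
    gray = cv2.cvtColor(frame[y:y+h, x:x+w], cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (21, 21), 0)
    if prev is None:
        prev = gray
        continue
    # Any pixel that changed "enough" counts as motion -- shadows,
    # headlights, and clouds included, which is why false alarms abound.
    delta = cv2.absdiff(prev, gray)
    changed = cv2.threshold(delta, 25, 255, cv2.THRESH_BINARY)[1]
    if cv2.countNonZero(changed) > 500:
        print("Motion bookmark / alert")
    prev = gray

Nothing in this logic knows what moved, only that pixels changed, which is the root of the false-positive problem described above.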
What Makes a Camera “Smart”?
Just like your smart speaker is able to recognize your voice or a Tesla is aware of the cars around it, machine learning and deep learning algorithms, both subsets of AI, can be trained to recognize and respond to objects in their environment. Such “AI-enabled” smart devices are revolutionizing the Internet of Things (IoT) market. For an AI-enabled security camera, these same algorithms have been trained to recognize and classify objects as a person or a vehicle. They are also able to detect specific attributes of those objects such as color, size, and type (motorcycle, truck, or sedan, for example). They can detect objects carried or worn, such as hats, bags, glasses, and backpacks, all of which help describe the detected object more accurately. Suddenly, we’ve blown past pixel-based motion to a world where we can install a camera at the primary entrance of a school and ask it to tell us when a person comes to the door. And now we can be 99% sure that it won’t be a false alarm. Best of all, this “smart” AI technology fits inside a security camera and requires no expensive servers to function. Hence the term “edge-based” processing.
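As a rough illustration of what that on-camera inference involves, the sketch below approximates it in Python with an off-the-shelf torchvision detector; real cameras run vendor-trained models on dedicated edge hardware, so treat the model, label lookup, and confidence threshold here as stand-in assumptions.

# A simplified sketch of person/vehicle classification, approximated with a
# pretrained torchvision detector (a stand-in for a vendor's edge model).
import torch
import torchvision
from torchvision.models.detection import FasterRCNN_ResNet50_FPN_Weights
from torchvision.transforms.functional import to_tensor

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights=weights).eval()
labels = weights.meta["categories"]      # includes "person", "car", "truck", ...

def person_detected(frame, min_score=0.8):
    """Return True if a person appears in the frame with high confidence.
    frame is an RGB image (PIL Image or HxWxC numpy array)."""
    with torch.no_grad():
        result = model([to_tensor(frame)])[0]
    for label_id, score in zip(result["labels"], result["scores"]):
        if float(score) >= min_score and labels[int(label_id)] == "person":
            return True
    return False

# person_detected(frame) could drive a "person at the main entrance" alert.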
Proactive vs. Reactive
While AI-based object detection has fundamentally changed the rules for analytics, allowing cameras to be installed anywhere with little risk of generating false alarms, it’s also ushering in completely new ways to think about security and the role of video surveillance on campus. Traditionally, video surveillance has been used to search and review events that have taken place in the past. The new breed of AI-based cameras helps security organizations adopt a more proactive stance, notifying operators of events as they happen in real time. Why is a person loitering at the football stadium at 4 a.m.? Why are cars driving the wrong way down a one-way street? For many campus security teams, these capabilities couldn’t arrive at a more opportune time. Emerging from the pandemic, resources and manpower remain in short supply. These new AI-based cameras function like another pair of eyes that never sleep. On an ordinary day, they simply enable security professionals to do a better job with fewer resources. On an extraordinary day, instead of merely recording events, they can help prevent events from occurring or escalating by providing timely notifications to security personnel when something seems out of the ordinary.
Deploying a Successful AI-Based Video Surveillance System
With all the power and potential of an AI-based video system, it’s important to realize that not all systems are the same. The implementation of the technology can vary widely between manufacturers and might not be appropriate for campus security use cases. For example, some AI-enabled cameras only detect objects but not the attributes that distinguish those objects from others. Looking for a man with a white shirt and red shoes? Make sure the system you invest in can detect colors and other attributes and store this data, called metadata, along with the video. The type and number of attributes that are captured also vary. Very few systems on the market can detect shoe color, for example.
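To make the idea of attribute metadata concrete, here is a hypothetical sketch of detection records and a forensic search over them; the field names and values are invented for illustration, since every vendor structures its metadata differently.

# A hypothetical sketch of attribute metadata stored alongside video,
# and a forensic search over it. Field names are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Detection:
    camera: str
    timestamp: str
    object_type: str                       # "person" or "vehicle"
    attributes: dict = field(default_factory=dict)

detections = [
    Detection("lobby-cam", "2023-03-14T08:02:11", "person",
              {"top_color": "white", "shoe_color": "red", "bag": True}),
    Detection("lot-b-cam", "2023-03-14T08:05:43", "vehicle",
              {"vehicle_type": "truck", "color": "blue"}),
]

# "Looking for a man with a white shirt and red shoes?"
matches = [d for d in detections
           if d.object_type == "person"
           and d.attributes.get("top_color") == "white"
           and d.attributes.get("shoe_color") == "red"]
for d in matches:
    print(d.camera, d.timestamp)

The more attributes a camera captures, the more precise this kind of search can be, which is why the breadth of metadata matters when comparing systems.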
Most importantly, the descriptive metadata captured from an AI-based camera is only useful if it can be interpreted and displayed within a customer’s chosen video management system (VMS) or network video recorder (NVR). It’s imperative to ensure a “match” between AI-enabled cameras and the VMS. The VMS needs to be able not only to display the camera output but also to search or react to all the AI metadata it receives. For example, i-PRO’s Active Guard plug-in unlocks forensic search capabilities and delivers the most attributes to popular VMSs such as Genetec, Milestone, and Video Insight.
It also pays to remember that color is a critical component of successful AI analytics. To keep these analytics working their best as the light recedes, the colors that define objects need to be maintained. This requires the finest low-light performance available from a lens/sensor combination. The goal should be to keep the camera capturing in color for as long as possible. If a camera turns on infrared (IR) to illuminate a scene, color information is lost, since most cameras automatically change to black-and-white mode. Make sure any camera you choose makes the best use of available light sources and has a very good low-light rating. AI-based technology can also be used to enhance a camera’s image processing by removing noise in low light. Features such as wide dynamic range and advanced intelligent auto (which automatically adjusts shutter speed, auto exposure, and gamma correction) can further enhance images.
What’s Next with Analytics?
The great thing about these AI-based cameras is that they have the potential to evolve with the times. Like the apps we have on our smartphones, some AI-based cameras can install AI apps to suit the unique requirements of their environment. In a campus environment, it might be useful for some cameras in parking lots to be fitted with license plate recognition (LPR) analytics. Other cameras might be suited to measure occupancy levels or collect data on congestion in cafeterias to better manage staffing. Not all cameras have an open platform that allows the use of a wide range of analytics to suit customized needs, but if this is important to you, seek out camera manufacturers that embrace open platforms and promote integration with several analytic partners.
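As one hypothetical example of such an app, the sketch below shows the kind of occupancy logic a congestion analytic might apply to a stream of per-frame people counts from an on-camera detector; the capacity and averaging window are illustrative assumptions, not values from any product.

# A hypothetical occupancy/congestion check over per-frame people counts.
from collections import deque

CAPACITY = 120                      # assumed cafeteria capacity
WINDOW = 60                         # number of recent samples to average

recent_counts = deque(maxlen=WINDOW)

def update_occupancy(people_count: int) -> None:
    """Feed in the latest people count; alert if the space stays over capacity."""
    recent_counts.append(people_count)
    average = sum(recent_counts) / len(recent_counts)
    if len(recent_counts) == WINDOW and average > CAPACITY:
        print(f"Cafeteria congestion: avg {average:.0f} people, consider adding staff")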
There’s an exciting new area of analytics that you’ll be hearing more about soon. Known as scene-change analytics, this type of analytic considers a scene in its entirety and catalogs what it normally looks like. If something in the scene changes and remains changed after a preset time limit, the analytic will then detect the anomaly and send out a notification. For example, you might have a room with an outer door that is normally closed. While it’s not a problem if the door is opened occasionally, if the door is left open continuously for over 10 minutes, that might be cause for a notification to be sent to security. The possibilities are endless, and this represents the next evolution of edge-based analytics in security cameras.
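The logic behind such a persistence check can be sketched in a few lines; the ten-minute limit below simply mirrors the door example, and the baseline comparison itself is left abstract since each analytic models “normal” in its own way.

# A hypothetical sketch of scene-change persistence: alert only if a
# deviation from the learned baseline lasts longer than a preset limit.
import time

PERSISTENCE_LIMIT = 10 * 60         # seconds a change may persist before alerting

changed_since = None                # when the scene first deviated from normal

def check_scene(matches_baseline: bool) -> None:
    """Call periodically with whether the current scene still matches the baseline."""
    global changed_since
    now = time.time()
    if matches_baseline:
        changed_since = None        # scene returned to normal; reset the timer
    elif changed_since is None:
        changed_since = now         # first moment of deviation; start timing
    elif now - changed_since > PERSISTENCE_LIMIT:
        print("Notify security: scene has remained changed past the time limit")
        changed_since = now         # avoid repeating the alert on every call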
If you’re not already taking advantage of this powerful technology to enhance your campus security, it’s a great time to upgrade. AI-based cameras are rapidly becoming the standard, and in the not-too-distant future, it may be hard to find cameras without such features.
Amy Bolin is the Sales Development Manager at i-PRO Americas Inc.